Commit f10f1994 authored by Brad Nelson's avatar Brad Nelson
homology backend switching

parent 4287baeb
@@ -269,6 +269,19 @@ x = torch.rand(n, d, dtype=torch.float).requires_grad_(True)
dgms, issublevelset = layer(x)
```
### Persistence Backends
Several algorithms are available for computing persistence; choose one by setting the `alg` keyword in any layer.
* `'hom'` (default) will run the standard reduction algorithm
* `'hom2'` will run the homology reduction algorithm, heuristically attempting to minimize the number of nonzeros
* `'cohom'` will run the cohomology algorithm
```python
layer = LevelSetLayer1D(size=10, sublevel=False, alg='cohom')
```
The best-performing algorithm depends on the application, although `'hom'` is currently fastest on some [simple benchmarks](examples/cpp/alg_comparison.py).
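To make the `'hom'` option concrete, here is a toy Python sketch of the standard reduction algorithm over F2 that it refers to. This is an illustration of the well-known algorithm, not the library's C++ implementation; `reduce_boundary` and its column representation are hypothetical names for this sketch.

```python
def low(col):
    # largest row index with a nonzero entry, or None for an empty column
    return col[-1] if col else None

def reduce_boundary(columns):
    """Standard persistence reduction over F2 (a sketch).

    columns: list of sorted row-index lists (boundary matrix columns, in
    filtration order). Columns are reduced left to right: while another
    reduced column shares the current column's low, add it in (mod 2).
    """
    lows = {}  # low row index -> index of the reduced column that owns it
    R = [sorted(c) for c in columns]
    for j in range(len(R)):
        while R[j] and low(R[j]) in lows:
            k = lows[low(R[j])]
            # symmetric difference of index sets = column addition mod 2
            R[j] = sorted(set(R[j]) ^ set(R[k]))
        if R[j]:
            lows[low(R[j])] = j
    return R

# Filtered triangle: three vertices, three edges, one 2-cell.
R = reduce_boundary([[], [], [], [0, 1], [0, 2], [1, 2], [3, 4, 5]])
```

After reduction, each nonzero column `j` with `low(R[j]) == i` pairs the birth of simplex `i` with the death at simplex `j`; here the third edge's column reduces to zero because the 1-cycle it creates is filled by the 2-cell.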
## Featurization Layers
Persistence diagrams are hard to work with directly in machine learning, so we implement some easy-to-use featurizations.
......
@@ -26,7 +26,7 @@ std::vector<SparseF2Vec<int>> sorted_boundary(SimplicialComplex &X, size_t MAXDI
// should also use filtration_perm to permute nzs in rows of columns
std::vector<int> row_inds; // row indices for column
for (size_t j : X.filtration_perm ) {
-		if (X.dim(j) > MAXDIM+1) { continue; }
+		//if (X.dim(j) > MAXDIM+2) { continue; }
row_inds.clear(); // clear out column
// go through non-zeros in boundary
for (auto i : X.bdr[j].cochain.nzinds) {
......
@@ -116,7 +116,7 @@ class SparseF2Vec{
// return offset element from last
T from_end(size_t offset) {
-		return nzinds.at(nzinds.size() - 1 - offset);
+		return nzinds[nzinds.size() - 1 - offset];
}
};
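The accessor changed above returns the offset-th nonzero index counted back from the largest. A minimal Python sketch of the same semantics (the `SparseF2VecSketch` class below is a hypothetical stand-in, not the library's API):

```python
class SparseF2VecSketch:
    """Toy model of a sparse F2 vector: sorted nonzero row indices."""

    def __init__(self, nzinds):
        self.nzinds = sorted(nzinds)

    def from_end(self, offset):
        # offset-th element counted back from the last (largest) index;
        # mirrors nzinds[nzinds.size() - 1 - offset] in the C++ above
        return self.nzinds[len(self.nzinds) - 1 - offset]

v = SparseF2VecSketch([2, 5, 7, 11])
```

`from_end(0)` is the "low" of a boundary column, the pivot the reduction algorithms repeatedly query, which is why this accessor sits on a hot path and the bounds-checked `.at` was swapped for unchecked indexing.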
......