Initial dataset upload · commit 1bb6656

[Five data files from this commit: 520 Bytes, 5.88 MB, 13.9 MB, 806 MB, 1.32 GB; filenames not captured in this listing.]
model.pickle
Detected Pickle imports (62)
- "sentence_transformers.models.Pooling.Pooling",
- "torch.nn.modules.activation.Tanh",
- "torch._C._nn.gelu",
- "cuml.internals.mem_type.MemoryType",
- "bertopic._bertopic.TopicMapper",
- "transformers.models.bert.modeling_bert.BertIntermediate",
- "sklearn.neighbors._kd_tree.KDTree",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.linear.Linear",
- "cupy._core.core.array",
- "pylibraft.common.handle.Handle",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.bert.modeling_bert.BertLayer",
- "transformers.models.bert.modeling_bert.BertPooler",
- "cuml.common.array_descriptor.CumlArrayDescriptorMeta",
- "bertopic.backend._sentencetransformers.SentenceTransformerBackend",
- "builtins.getattr",
- "hdbscan.dist_metrics.EuclideanDistance",
- "collections.Counter",
- "cuml.manifold.umap.UMAP",
- "transformers.models.bert.modeling_bert.BertSelfOutput",
- "cuml.internals.array.CumlArray",
- "hdbscan.hdbscan_.HDBSCAN",
- "collections.OrderedDict",
- "transformers.models.bert.modeling_bert.BertEmbeddings",
- "transformers.models.bert.modeling_bert.BertSdpaSelfAttention",
- "bertopic._bertopic.BERTopic",
- "torch.storage._load_from_bytes",
- "numpy.int64",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.bert.modeling_bert.BertEncoder",
- "transformers.models.bert.modeling_bert.BertModel",
- "transformers.models.bert.modeling_bert.BertAttention",
- "tokenizers.models.Model",
- "sklearn.neighbors._kd_tree.newObj",
- "numpy.dtype",
- "sklearn.metrics._dist_metrics.newObj",
- "transformers.activations.GELUActivation",
- "hdbscan.dist_metrics.newObj",
- "torch.nn.modules.container.ModuleList",
- "sklearn.feature_extraction.text.CountVectorizer",
- "joblib.memory.Memory",
- "transformers.models.bert.modeling_bert.BertOutput",
- "tokenizers.Tokenizer",
- "hdbscan.prediction.PredictionData",
- "sklearn.metrics._dist_metrics.EuclideanDistance64",
- "sentence_transformers.models.Normalize.Normalize",
- "transformers.models.bert.tokenization_bert_fast.BertTokenizerFast",
- "sentence_transformers.models.Transformer.Transformer",
- "sentence_transformers.SentenceTransformer.SentenceTransformer",
- "torch._utils._rebuild_parameter",
- "numpy.core.multiarray._reconstruct",
- "scipy.sparse._csr.csr_matrix",
- "torch.nn.modules.normalization.LayerNorm",
- "tokenizers.AddedToken",
- "torch.torch_version.TorchVersion",
- "numpy.core.multiarray.scalar",
- "transformers.models.bert.configuration_bert.BertConfig",
- "sentence_transformers.model_card.SentenceTransformerModelCardData",
- "numpy.ndarray",
- "bertopic.vectorizers._ctfidf.ClassTfidfTransformer",
- "cupyx.scipy.sparse._coo.coo_matrix"
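An import report like the one above can be produced without ever executing the pickle: `pickletools` walks the opcode stream statically, so every `GLOBAL` / `STACK_GLOBAL` reference can be collected with no code from the file running. A minimal sketch in that spirit (the function name and the string-tracking heuristic are illustrative, not the scanner this platform actually uses):

```python
import pickletools

def scan_pickle_imports(path):
    """Statically list the module.name globals referenced by a pickle file.

    Iterates the opcode stream with pickletools.genops, so no pickle
    bytecode is executed and no referenced class is ever instantiated.
    """
    imports = set()
    pushed = []  # recent string pushes; STACK_GLOBAL pops module, name
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in ("GLOBAL", "INST"):
                # genops delivers the two newline-terminated strings
                # joined by a space: "module name".
                module, _, name = arg.rpartition(" ")
                imports.add(f"{module}.{name}")
            elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                                 "BINUNICODE8", "UNICODE"):
                pushed.append(arg)
            elif opcode.name == "STACK_GLOBAL" and len(pushed) >= 2:
                # Heuristic: the last two strings pushed are the module
                # and qualified name (true for CPython's pickler output).
                imports.add(f"{pushed[-2]}.{pushed[-1]}")
    return sorted(imports)
```

This is only a sketch: a robust scanner would model the unpickling stack exactly rather than tracking recent string pushes, and `genops` stops at the first `STOP` opcode, so payload bytes stored after the pickle (as some checkpoint formats do) are not inspected.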
model.pickle: 134 MB · Initial dataset upload
[Four further files from the same commit ("Initial dataset upload"): 5.36 MB, 7.53 kB, 5.72 kB, 129 kB; filenames not captured in this listing.]