| Column | Type | Range / values |
|---|---|---|
| function_name | string | length 1-57 |
| function_code | string | length 20-4.99k |
| documentation | string | length 50-2k |
| language | string | 5 values |
| file_path | string | length 8-166 |
| line_number | int32 | 4-16.7k |
| parameters | list | length 0-20 |
| return_type | string | length 0-131 |
| has_type_hints | bool | 2 classes |
| complexity | int32 | 1-51 |
| quality_score | float32 | 6-9.68 |
| repo_name | string | 34 values |
| repo_stars | int32 | 2.9k-242k |
| docstring_style | string | 7 values |
| is_async | bool | 2 classes |

The records below list these fields in this order, one value per line (empty values omitted).
img_to_graph
def img_to_graph(img, *, mask=None, return_as=sparse.coo_matrix, dtype=None): """Graph of the pixel-to-pixel gradient connections. Edges are weighted with the gradient values. Read more in the :ref:`User Guide <image_feature_extraction>`. Parameters ---------- img : array-like of shape (height, width) or (height, width, channel) 2D or 3D image. mask : ndarray of shape (height, width) or \ (height, width, channel), dtype=bool, default=None An optional mask of the image, to consider only part of the pixels. return_as : np.ndarray or a sparse matrix class, \ default=sparse.coo_matrix The class to use to build the returned adjacency matrix. dtype : dtype, default=None The data of the returned sparse matrix. By default it is the dtype of img. Returns ------- graph : ndarray or a sparse matrix class The computed adjacency matrix. Examples -------- >>> import numpy as np >>> from sklearn.feature_extraction.image import img_to_graph >>> img = np.array([[0, 0], [0, 1]]) >>> img_to_graph(img, return_as=np.ndarray) array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 1]]) """ img = np.atleast_3d(img) n_x, n_y, n_z = img.shape return _to_graph(n_x, n_y, n_z, mask, img, return_as, dtype)
Graph of the pixel-to-pixel gradient connections. Edges are weighted with the gradient values. Read more in the :ref:`User Guide <image_feature_extraction>`. Parameters ---------- img : array-like of shape (height, width) or (height, width, channel) 2D or 3D image. mask : ndarray of shape (height, width) or \ (height, width, channel), dtype=bool, default=None An optional mask of the image, to consider only part of the pixels. return_as : np.ndarray or a sparse matrix class, \ default=sparse.coo_matrix The class to use to build the returned adjacency matrix. dtype : dtype, default=None The data of the returned sparse matrix. By default it is the dtype of img. Returns ------- graph : ndarray or a sparse matrix class The computed adjacency matrix. Examples -------- >>> import numpy as np >>> from sklearn.feature_extraction.image import img_to_graph >>> img = np.array([[0, 0], [0, 1]]) >>> img_to_graph(img, return_as=np.ndarray) array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 1]])
python
sklearn/feature_extraction/image.py
152
[ "img", "mask", "return_as", "dtype" ]
false
1
6.32
scikit-learn/scikit-learn
64,340
numpy
false
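For the record above, here is a minimal sketch of calling `img_to_graph` with a mask; the 3x3 image and mask values are illustrative only.

```python
import numpy as np
from sklearn.feature_extraction.image import img_to_graph

# A 3x3 image; mask out the top row so only 6 pixels become graph nodes.
img = np.arange(9).reshape(3, 3)
mask = np.ones((3, 3), dtype=bool)
mask[0] = False

graph = img_to_graph(img, mask=mask, return_as=np.ndarray)
print(graph.shape)  # (6, 6): one row/column per unmasked pixel
```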
firstBatchSize
public Integer firstBatchSize() { if (buffer.remaining() < HEADER_SIZE_UP_TO_MAGIC) return null; return new ByteBufferLogInputStream(buffer, Integer.MAX_VALUE).nextBatchSize(); }
Validates the header of the first batch and returns batch size. @return first batch size including LOG_OVERHEAD if buffer contains header up to magic byte, null otherwise @throws CorruptRecordException if record size or magic is invalid
java
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java
120
[]
Integer
true
2
7.76
apache/kafka
31,560
javadoc
false
parseObject
@Override public Object parseObject(final String source) throws ParseException { return parse(source); }
Parses a date/time string according to this parser's pattern and returns the result as an {@code Object}. Equivalent to calling {@code parse(source)}. @param source The text to parse. @return a date object parsed from the string. @throws ParseException if the beginning of the specified string cannot be parsed.
java
src/main/java/org/apache/commons/lang3/time/FastDateParser.java
1,080
[ "source" ]
Object
true
1
6.8
apache/commons-lang
2,896
javadoc
false
getBean
protected <T> T getBean(String name, Class<T> serviceType) { if (this.beanFactory == null) { throw new IllegalStateException( "BeanFactory must be set on cache aspect for " + serviceType.getSimpleName() + " retrieval"); } return BeanFactoryAnnotationUtils.qualifiedBeanOfType(this.beanFactory, serviceType, name); }
Retrieve a bean with the specified name and type. Used to resolve services that are referenced by name in a {@link CacheOperation}. @param name the name of the bean, as defined by the cache operation @param serviceType the type expected by the operation's service reference @return the bean matching the expected type, qualified by the given name @throws org.springframework.beans.factory.NoSuchBeanDefinitionException if such bean does not exist @see CacheOperation#getKeyGenerator() @see CacheOperation#getCacheManager() @see CacheOperation#getCacheResolver()
java
spring-context/src/main/java/org/springframework/cache/interceptor/CacheAspectSupport.java
380
[ "name", "serviceType" ]
T
true
2
7.44
spring-projects/spring-framework
59,386
javadoc
false
visitThisKeyword
function visitThisKeyword(node: Node): Node { hierarchyFacts |= HierarchyFacts.LexicalThis; if (hierarchyFacts & HierarchyFacts.ArrowFunction && !(hierarchyFacts & HierarchyFacts.StaticInitializer)) { hierarchyFacts |= HierarchyFacts.CapturedLexicalThis; } if (convertedLoopState) { if (hierarchyFacts & HierarchyFacts.ArrowFunction) { // if the enclosing function is an ArrowFunction then we use the captured 'this' keyword. convertedLoopState.containsLexicalThis = true; return node; } return convertedLoopState.thisName || (convertedLoopState.thisName = factory.createUniqueName("this")); } return node; }
Visits the `this` keyword. Records that lexical `this` is referenced and, when inside an arrow function (but not a static initializer), that it is captured. Inside a converted loop body, either flags the loop state as containing lexical `this` (when the enclosing function is an arrow function) or substitutes the loop state's captured `this` name for the node. @param node The `this` keyword node to visit.
typescript
src/compiler/transformers/es2015.ts
854
[ "node" ]
true
6
6.4
microsoft/TypeScript
107,154
jsdoc
false
_entry_is_valid
def _entry_is_valid( cfg: dict[str, Any], template_id: str, template_hash_map: Optional[dict[str, Optional[str]]], ) -> bool: """ Check if a config entry is valid based on template hash validation. Args: cfg: Configuration dictionary that may contain a template_hash field template_id: The template identifier template_hash_map: Optional mapping from template_uid to src_hash for validation Returns: True if the config is valid and should be kept, False if it should be filtered out """ # If hash checking is disabled or no hash map provided, keep the config if not config.lookup_table.check_src_hash or not template_hash_map: return True template_hash = template_hash_map.get(template_id) config_hash = cfg.get("template_hash") # Both hashes present - validate they match if template_hash is not None and config_hash is not None: if config_hash != template_hash: log.warning( "Hash validation failed for template '%s': config_hash='%s' != template_hash='%s'. " "Template code may have changed. Filtering out config: %s", template_id, config_hash, template_hash, {k: v for k, v in cfg.items() if k != "template_hash"}, ) return False else: log.debug( "Hash validation passed for template '%s': hash='%s'", template_id, template_hash, ) return True # Config has no hash - keep it elif config_hash is None: log.debug( "Config for template '%s' has no hash - keeping it (template_hash='%s')", template_id, template_hash, ) return True # Template has no hash - keep config else: log.debug( "Template '%s' has no src_hash - keeping config with hash '%s'", template_id, config_hash, ) return True
Check if a config entry is valid based on template hash validation. Args: cfg: Configuration dictionary that may contain a template_hash field template_id: The template identifier template_hash_map: Optional mapping from template_uid to src_hash for validation Returns: True if the config is valid and should be kept, False if it should be filtered out
python
torch/_inductor/lookup_table/choices.py
147
[ "cfg", "template_id", "template_hash_map" ]
bool
true
9
7.68
pytorch/pytorch
96,034
google
false
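The hash-validation rule above reduces to a small decision table: keep the config unless both hashes are present and disagree. A self-contained sketch, independent of the inductor `config` and logging machinery (the function name here is hypothetical):

```python
from typing import Any, Optional

def entry_is_valid_sketch(cfg: dict[str, Any], template_hash: Optional[str]) -> bool:
    """Keep a config unless both hashes exist and disagree."""
    config_hash = cfg.get("template_hash")
    if template_hash is None or config_hash is None:
        return True  # a missing hash on either side keeps the config
    return config_hash == template_hash

assert entry_is_valid_sketch({"template_hash": "abc"}, "abc") is True
assert entry_is_valid_sketch({"template_hash": "abc"}, "def") is False
assert entry_is_valid_sketch({}, "abc") is True  # config has no hash
```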
indexOf
public static int indexOf(char[] array, char target) { return indexOf(array, target, 0, array.length); }
Returns the index of the first appearance of the value {@code target} in {@code array}. @param array an array of {@code char} values, possibly empty @param target a primitive {@code char} value @return the least index {@code i} for which {@code array[i] == target}, or {@code -1} if no such index exists.
java
android/guava/src/com/google/common/primitives/Chars.java
150
[ "array", "target" ]
true
1
6.48
google/guava
51,352
javadoc
false
getComputedFieldsFromModel
function getComputedFieldsFromModel( name: string | undefined, previousComputedFields: ComputedFieldsMap | undefined, modelResult: ResultArg | undefined, ): ComputedFieldsMap { if (!modelResult) { return {} } return mapObjectValues(modelResult, ({ needs, compute }, fieldName) => ({ name: fieldName, needs: needs ? Object.keys(needs).filter((key) => needs[key]) : [], compute: composeCompute(previousComputedFields, fieldName, compute), })) }
Given the list of previously resolved computed fields, a new extension result and the dmmf model name, produces a map of all computed fields that may be applied to this model, accounting for all previous and current extensions. Naming conflicts that could arise from the plain list of extensions are resolved as follows: - an extension declared later always wins - within a single extension, a specific model takes precedence over $allModels Additionally, resolves all `needs` dependencies down to the model fields. For example, if the `nameAndTitle` field depends on the `fullName` computed field and the `title` model field, and `fullName` in turn depends on the `firstName` and `lastName` fields, the full list of `nameAndTitle` dependencies would be `firstName`, `lastName` and `title`. @param name @param previousComputedFields @param modelResult @returns
typescript
packages/client/src/runtime/core/extensions/resultUtils.ts
75
[ "name", "previousComputedFields", "modelResult" ]
true
3
7.76
prisma/prisma
44,834
jsdoc
false
addToken
private void addToken(final List<String> list, String tok) { if (StringUtils.isEmpty(tok)) { if (isIgnoreEmptyTokens()) { return; } if (isEmptyTokenAsNull()) { tok = null; } } list.add(tok); }
Adds a token to a list, paying attention to the parameters we've set. @param list the list to add to. @param tok the token to add.
java
src/main/java/org/apache/commons/lang3/text/StrTokenizer.java
415
[ "list", "tok" ]
void
true
4
7.04
apache/commons-lang
2,896
javadoc
false
get_bucket
def get_bucket(self, bucket_name: str | None = None) -> S3Bucket: """ Return a :py:class:`S3.Bucket` object. .. seealso:: - :external+boto3:py:meth:`S3.ServiceResource.Bucket` :param bucket_name: the name of the bucket :return: the bucket object for the given bucket name. """ return self.resource.Bucket(bucket_name)
Return a :py:class:`S3.Bucket` object. .. seealso:: - :external+boto3:py:meth:`S3.ServiceResource.Bucket` :param bucket_name: the name of the bucket :return: the bucket object for the given bucket name.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/s3.py
327
[ "self", "bucket_name" ]
S3Bucket
true
1
6.24
apache/airflow
43,597
sphinx
false
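A hedged usage sketch for the hook above; it assumes the Amazon provider is installed and that an `aws_default` connection with valid credentials exists, and the bucket name is made up:

```python
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

hook = S3Hook(aws_conn_id="aws_default")       # assumed connection id
bucket = hook.get_bucket("my-example-bucket")  # hypothetical bucket name
for obj in bucket.objects.limit(5):            # boto3 S3.Bucket resource
    print(obj.key)
```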
createBean
<T> T createBean(Class<T> beanClass) throws BeansException;
Fully create a new bean instance of the given class. <p>Performs full initialization of the bean, including all applicable {@link BeanPostProcessor BeanPostProcessors}. <p>Note: This is intended for creating a fresh instance, populating annotated fields and methods as well as applying all standard bean initialization callbacks. Constructor resolution is based on Kotlin primary / single public / single non-public, with a fallback to the default constructor in ambiguous scenarios, also influenced by {@link SmartInstantiationAwareBeanPostProcessor#determineCandidateConstructors} (for example, for annotation-driven constructor selection). @param beanClass the class of the bean to create @return the new bean instance @throws BeansException if instantiation or wiring failed
java
spring-beans/src/main/java/org/springframework/beans/factory/config/AutowireCapableBeanFactory.java
137
[ "beanClass" ]
T
true
1
6
spring-projects/spring-framework
59,386
javadoc
false
fit
def fit(self, X, y=None): """Fit the imputer on `X`. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Input data, where `n_samples` is the number of samples and `n_features` is the number of features. y : Ignored Not used, present here for API consistency by convention. Returns ------- self : object Fitted estimator. """ X = self._validate_input(X, in_fit=True) # default fill_value is 0 for numerical input and "missing_value" # otherwise if self.fill_value is None: if X.dtype.kind in ("i", "u", "f"): fill_value = 0 else: fill_value = "missing_value" else: fill_value = self.fill_value self._fill_dtype = X.dtype if sp.issparse(X): self.statistics_ = self._sparse_fit( X, self.strategy, self.missing_values, fill_value ) else: self.statistics_ = self._dense_fit( X, self.strategy, self.missing_values, fill_value ) return self
Fit the imputer on `X`. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Input data, where `n_samples` is the number of samples and `n_features` is the number of features. y : Ignored Not used, present here for API consistency by convention. Returns ------- self : object Fitted estimator.
python
sklearn/impute/_base.py
430
[ "self", "X", "y" ]
false
7
6.08
scikit-learn/scikit-learn
64,340
numpy
false
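The fill-value defaulting in `fit` above (numeric input with `fill_value=None` falls back to 0, other dtypes to "missing_value") is easiest to see with `strategy="constant"`; a minimal sketch:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan], [3.0, 4.0], [np.nan, 6.0]])

imp = SimpleImputer(strategy="constant")  # fill_value=None -> 0 for floats
print(imp.fit_transform(X))
# [[1. 0.]
#  [3. 4.]
#  [0. 6.]]
```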
pathJoin
function pathJoin(paths: string[]): string { return paths .flatMap((p) => p.split('/')) .filter(Boolean) .join('/'); }
Combines path parts together, without duplicating separators (slashes). Used instead of `path.join` because this code runs in the browser. @param paths Array of paths to join together. @returns Joined path string, with single '/' between parts
typescript
code/core/src/preview-api/modules/store/autoTitle.ts
42
[ "paths" ]
true
1
7.04
storybookjs/storybook
88,865
jsdoc
false
get_console_size
def get_console_size() -> tuple[int | None, int | None]: """ Return console size as tuple = (width, height). Returns (None,None) in non-interactive session. """ from pandas import get_option display_width = get_option("display.width") display_height = get_option("display.max_rows") # Consider # interactive shell terminal, can detect term size # interactive non-shell terminal (ipnb/ipqtconsole), cannot detect term # size non-interactive script, should disregard term size # in addition # width,height have default values, but setting to 'None' signals # should use Auto-Detection, But only in interactive shell-terminal. # Simple. yeah. if in_interactive_session(): if in_ipython_frontend(): # sane defaults for interactive non-shell terminal # match default for width,height in config_init from pandas._config.config import get_default_val terminal_width = get_default_val("display.width") terminal_height = get_default_val("display.max_rows") else: # pure terminal terminal_width, terminal_height = get_terminal_size() else: terminal_width, terminal_height = None, None # Note if the User sets width/Height to None (auto-detection) # and we're in a script (non-inter), this will return (None,None) # caller needs to deal. return display_width or terminal_width, display_height or terminal_height
Return console size as tuple = (width, height). Returns (None,None) in non-interactive session.
python
pandas/io/formats/console.py
10
[]
tuple[int | None, int | None]
true
7
6.72
pandas-dev/pandas
47,362
unknown
false
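The precedence in `get_console_size` boils down to: an explicitly set option wins, otherwise auto-detect only in an interactive terminal, otherwise None. A simplified, self-contained sketch of that rule (the function name is hypothetical):

```python
import shutil

def console_width_sketch(display_width, interactive):
    """Explicit option wins; else detect in an interactive terminal; else None."""
    terminal_width = shutil.get_terminal_size().columns if interactive else None
    return display_width or terminal_width

print(console_width_sketch(80, interactive=False))    # 80 (explicit option)
print(console_width_sketch(None, interactive=True))   # detected terminal width
print(console_width_sketch(None, interactive=False))  # None (script context)
```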
castCap
function castCap(name, func) { if (config.cap) { var indexes = mapping.iterateeRearg[name]; if (indexes) { return iterateeRearg(func, indexes); } var n = !isLib && mapping.iterateeAry[name]; if (n) { return iterateeAry(func, n); } } return func; }
Casts `func` to a function with an arity capped iteratee if needed. @private @param {string} name The name of the function to inspect. @param {Function} func The function to inspect. @returns {Function} Returns the cast function.
javascript
fp/_baseConvert.js
277
[ "name", "func" ]
false
5
6.24
lodash/lodash
61,490
jsdoc
false
_match_levels
def _match_levels( tensor: torch.Tensor, from_levels: list[DimEntry], to_levels: list[DimEntry], drop_levels: bool = False, ) -> torch.Tensor: """ Reshape a tensor to match target levels using as_strided. Args: tensor: Input tensor to reshape from_levels: Current levels of the tensor to_levels: Target levels to match drop_levels: If True, missing dimensions are assumed to have stride 0 Returns: Reshaped tensor """ if from_levels == to_levels: return tensor sizes = tensor.size() strides = tensor.stride() if not drop_levels: assert len(from_levels) <= len(to_levels), ( "Cannot expand dimensions without drop_levels" ) new_sizes = [] new_strides = [] for level in to_levels: # Find index of this level in from_levels try: idx = from_levels.index(level) except ValueError: # Level not found in from_levels if level.is_positional(): new_sizes.append(1) else: new_sizes.append(level.dim().size) new_strides.append(0) else: new_sizes.append(sizes[idx]) new_strides.append(strides[idx]) return tensor.as_strided(new_sizes, new_strides, tensor.storage_offset())
Reshape a tensor to match target levels using as_strided. Args: tensor: Input tensor to reshape from_levels: Current levels of the tensor to_levels: Target levels to match drop_levels: If True, missing dimensions are assumed to have stride 0 Returns: Reshaped tensor
python
functorch/dim/_dim_entry.py
80
[ "tensor", "from_levels", "to_levels", "drop_levels" ]
torch.Tensor
true
7
7.92
pytorch/pytorch
96,034
google
false
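The stride-0 trick `_match_levels` uses for missing levels is ordinary `as_strided` broadcasting; a minimal illustration with a plain tensor:

```python
import torch

# Expand a (3,) tensor to (2, 3) without copying: the inserted leading
# dimension gets stride 0, as _match_levels does for levels absent from
# `from_levels`.
t = torch.arange(3.0)
expanded = t.as_strided((2, 3), (0, 1))
print(expanded)           # two identical rows sharing storage
print(expanded.stride())  # (0, 1)
```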
trim_zeros
def trim_zeros(filt, trim='fb', axis=None): """Remove values along a dimension which are zero along all other. Parameters ---------- filt : array_like Input array. trim : {"fb", "f", "b"}, optional A string with 'f' representing trim from front and 'b' to trim from back. By default, zeros are trimmed on both sides. Front and back refer to the edges of a dimension, with "front" referring to the side with the lowest index 0, and "back" referring to the highest index (or index -1). axis : int or sequence, optional If None, `filt` is cropped such that the smallest bounding box is returned that still contains all values which are not zero. If an axis is specified, `filt` will be sliced in that dimension only on the sides specified by `trim`. The remaining area will be the smallest that still contains all values which are not zero. .. versionadded:: 2.2.0 Returns ------- trimmed : ndarray or sequence The result of trimming the input. The number of dimensions and the input data type are preserved. Notes ----- For all-zero arrays, the first axis is trimmed first. Examples -------- >>> import numpy as np >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) >>> np.trim_zeros(a, trim='b') array([0, 0, 0, ..., 0, 2, 1]) Multiple dimensions are supported. >>> b = np.array([[0, 0, 2, 3, 0, 0], ... [0, 1, 0, 3, 0, 0], ... [0, 0, 0, 0, 0, 0]]) >>> np.trim_zeros(b) array([[0, 2, 3], [1, 0, 3]]) >>> np.trim_zeros(b, axis=-1) array([[0, 2, 3], [1, 0, 3], [0, 0, 0]]) The input data type is preserved, list/tuple in means list/tuple out. >>> np.trim_zeros([0, 1, 2, 0]) [1, 2] """ filt_ = np.asarray(filt) trim = trim.lower() if trim not in {"fb", "bf", "f", "b"}: raise ValueError(f"unexpected character(s) in `trim`: {trim!r}") if axis is None: axis_tuple = tuple(range(filt_.ndim)) else: axis_tuple = _nx.normalize_axis_tuple(axis, filt_.ndim, argname="axis") if not axis_tuple: # No trimming requested -> return input unmodified. return filt start, stop = _arg_trim_zeros(filt_) stop += 1 # Adjust for slicing if start.size == 0: # filt is all-zero -> assign same values to start and stop so that # resulting slice will be empty start = stop = np.zeros(filt_.ndim, dtype=np.intp) else: if 'f' not in trim: start = (None,) * filt_.ndim if 'b' not in trim: stop = (None,) * filt_.ndim sl = tuple(slice(start[ax], stop[ax]) if ax in axis_tuple else slice(None) for ax in range(filt_.ndim)) if len(sl) == 1: # filt is 1D -> avoid multi-dimensional slicing to preserve # non-array input types return filt[sl[0]] return filt[sl]
Remove values along a dimension which are zero along all other. Parameters ---------- filt : array_like Input array. trim : {"fb", "f", "b"}, optional A string with 'f' representing trim from front and 'b' to trim from back. By default, zeros are trimmed on both sides. Front and back refer to the edges of a dimension, with "front" referring to the side with the lowest index 0, and "back" referring to the highest index (or index -1). axis : int or sequence, optional If None, `filt` is cropped such that the smallest bounding box is returned that still contains all values which are not zero. If an axis is specified, `filt` will be sliced in that dimension only on the sides specified by `trim`. The remaining area will be the smallest that still contains all values which are not zero. .. versionadded:: 2.2.0 Returns ------- trimmed : ndarray or sequence The result of trimming the input. The number of dimensions and the input data type are preserved. Notes ----- For all-zero arrays, the first axis is trimmed first. Examples -------- >>> import numpy as np >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) >>> np.trim_zeros(a, trim='b') array([0, 0, 0, ..., 0, 2, 1]) Multiple dimensions are supported. >>> b = np.array([[0, 0, 2, 3, 0, 0], ... [0, 1, 0, 3, 0, 0], ... [0, 0, 0, 0, 0, 0]]) >>> np.trim_zeros(b) array([[0, 2, 3], [1, 0, 3]]) >>> np.trim_zeros(b, axis=-1) array([[0, 2, 3], [1, 0, 3], [0, 0, 0]]) The input data type is preserved, list/tuple in means list/tuple out. >>> np.trim_zeros([0, 1, 2, 0]) [1, 2]
python
numpy/lib/_function_base_impl.py
1,941
[ "filt", "trim", "axis" ]
false
11
7.76
numpy/numpy
31,054
numpy
false
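One combination the docstring's examples do not show is restricting both the axis and the side; a small sketch (note the `axis` argument needs NumPy >= 2.2):

```python
import numpy as np

b = np.array([[0, 0, 2, 3, 0, 0],
              [0, 1, 0, 3, 0, 0],
              [0, 0, 0, 0, 0, 0]])

# Trim only trailing zero columns; leading columns and all rows are kept.
print(np.trim_zeros(b, trim="b", axis=-1))
# [[0 0 2 3]
#  [0 1 0 3]
#  [0 0 0 0]]
```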
_get_series_list
def _get_series_list(self, others): """ Auxiliary function for :meth:`str.cat`. Turn potentially mixed input into a list of Series (elements without an index must match the length of the calling Series/Index). Parameters ---------- others : Series, DataFrame, np.ndarray, list-like or list-like of Objects that are either Series, Index or np.ndarray (1-dim). Returns ------- list of Series Others transformed into list of Series. """ from pandas import ( DataFrame, Series, ) # self._orig is either Series or Index idx = self._orig if isinstance(self._orig, ABCIndex) else self._orig.index # Generally speaking, all objects without an index inherit the index # `idx` of the calling Series/Index - i.e. must have matching length. # Objects with an index (i.e. Series/Index/DataFrame) keep their own. if isinstance(others, ABCSeries): return [others] elif isinstance(others, ABCIndex): return [Series(others, index=idx, dtype=others.dtype)] elif isinstance(others, ABCDataFrame): return [others[x] for x in others] elif isinstance(others, np.ndarray) and others.ndim == 2: others = DataFrame(others, index=idx) return [others[x] for x in others] elif is_list_like(others, allow_sets=False): try: others = list(others) # ensure iterators do not get read twice etc except TypeError: # e.g. ser.str, raise below pass else: # in case of list-like `others`, all elements must be # either Series/Index/np.ndarray (1-dim)... if all( isinstance(x, (ABCSeries, ABCIndex, ExtensionArray)) or (isinstance(x, np.ndarray) and x.ndim == 1) for x in others ): los: list[Series] = [] while others: # iterate through list and append each element los = los + self._get_series_list(others.pop(0)) return los # ... or just strings elif all(not is_list_like(x) for x in others): return [Series(others, index=idx)] raise TypeError( "others must be Series, Index, DataFrame, np.ndarray " "or list-like (either containing only strings or " "containing only objects of type Series/Index/" "np.ndarray[1-dim])" )
Auxiliary function for :meth:`str.cat`. Turn potentially mixed input into a list of Series (elements without an index must match the length of the calling Series/Index). Parameters ---------- others : Series, DataFrame, np.ndarray, list-like or list-like of Objects that are either Series, Index or np.ndarray (1-dim). Returns ------- list of Series Others transformed into list of Series.
python
pandas/core/strings/accessor.py
418
[ "self", "others" ]
false
14
6
pandas-dev/pandas
47,362
numpy
false
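`_get_series_list` is internal, but its alignment rules surface through `Series.str.cat`: anything in `others` without an index (a plain list, a 1-dim array) must match the caller's length. A short sketch:

```python
import pandas as pd

s = pd.Series(["a", "b", "c"])

# A plain list has no index, so it must have the same length as `s`.
print(s.str.cat(["1", "2", "3"], sep="-").tolist())
# ['a-1', 'b-2', 'c-3']
```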
loadBeanDefinitions
public int loadBeanDefinitions(EncodedResource encodedResource) throws BeanDefinitionStoreException { return loadBeanDefinitions(encodedResource, null); }
Load bean definitions from the specified properties file. @param encodedResource the resource descriptor for the properties file, allowing to specify an encoding to use for parsing the file @return the number of bean definitions found @throws BeanDefinitionStoreException in case of loading or parsing errors
java
spring-beans/src/main/java/org/springframework/beans/factory/support/PropertiesBeanDefinitionReader.java
237
[ "encodedResource" ]
true
1
6
spring-projects/spring-framework
59,386
javadoc
false
addInitializer
public void addInitializer(final String name, final BackgroundInitializer<?> backgroundInitializer) { Objects.requireNonNull(name, "name"); Objects.requireNonNull(backgroundInitializer, "backgroundInitializer"); synchronized (this) { if (isStarted()) { throw new IllegalStateException("addInitializer() must not be called after start()!"); } childInitializers.put(name, backgroundInitializer); } }
Adds a new {@link BackgroundInitializer} to this object. When this {@link MultiBackgroundInitializer} is started, the given initializer will be processed. This method must not be called after {@link #start()} has been invoked. @param name the name of the initializer (must not be <strong>null</strong>). @param backgroundInitializer the {@link BackgroundInitializer} to add (must not be <strong>null</strong>). @throws NullPointerException if either {@code name} or {@code backgroundInitializer} is {@code null}. @throws IllegalStateException if {@code start()} has already been called.
java
src/main/java/org/apache/commons/lang3/concurrent/MultiBackgroundInitializer.java
252
[ "name", "backgroundInitializer" ]
void
true
2
6.4
apache/commons-lang
2,896
javadoc
false
to_iceberg
def to_iceberg( df: DataFrame, table_identifier: str, catalog_name: str | None = None, *, catalog_properties: dict[str, Any] | None = None, location: str | None = None, append: bool = False, snapshot_properties: dict[str, str] | None = None, ) -> None: """ Write a DataFrame to an Apache Iceberg table. .. versionadded:: 3.0.0 Parameters ---------- table_identifier : str Table identifier. catalog_name : str, optional The name of the catalog. catalog_properties : dict of {str: str}, optional The properties that are used next to the catalog configuration. location : str, optional Location for the table. append : bool, default False If ``True``, append data to the table, instead of replacing the content. snapshot_properties : dict of {str: str}, optional Custom properties to be added to the snapshot summary See Also -------- read_iceberg : Read an Apache Iceberg table. DataFrame.to_parquet : Write a DataFrame in Parquet format. """ pa = import_optional_dependency("pyarrow") pyiceberg_catalog = import_optional_dependency("pyiceberg.catalog") if catalog_properties is None: catalog_properties = {} catalog = pyiceberg_catalog.load_catalog(catalog_name, **catalog_properties) arrow_table = pa.Table.from_pandas(df) table = catalog.create_table_if_not_exists( identifier=table_identifier, schema=arrow_table.schema, location=location, # we could add `partition_spec`, `sort_order` and `properties` in the # future, but it may not be trivial without exposing PyIceberg objects ) if snapshot_properties is None: snapshot_properties = {} if append: table.append(arrow_table, snapshot_properties=snapshot_properties) else: table.overwrite(arrow_table, snapshot_properties=snapshot_properties)
Write a DataFrame to an Apache Iceberg table. .. versionadded:: 3.0.0 Parameters ---------- table_identifier : str Table identifier. catalog_name : str, optional The name of the catalog. catalog_properties : dict of {str: str}, optional The properties that are used next to the catalog configuration. location : str, optional Location for the table. append : bool, default False If ``True``, append data to the table, instead of replacing the content. snapshot_properties : dict of {str: str}, optional Custom properties to be added to the snapshot summary See Also -------- read_iceberg : Read an Apache Iceberg table. DataFrame.to_parquet : Write a DataFrame in Parquet format.
python
pandas/io/iceberg.py
100
[ "df", "table_identifier", "catalog_name", "catalog_properties", "location", "append", "snapshot_properties" ]
None
true
5
6.64
pandas-dev/pandas
47,362
numpy
false
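A hedged sketch of calling `to_iceberg`; it assumes pandas >= 3.0 with pyarrow and pyiceberg installed, and the SQL-catalog properties and table identifier below are illustrative assumptions, not the only valid configuration:

```python
import pandas as pd
from pandas.io.iceberg import to_iceberg

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

to_iceberg(
    df,
    table_identifier="default.demo",  # hypothetical namespace.table
    catalog_properties={              # assumed local SQL catalog settings
        "type": "sql",
        "uri": "sqlite:///catalog.db",
        "warehouse": "file://warehouse",
    },
    append=False,                     # overwrite the table contents
)
```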
get
public Object get(int index) throws JSONException { try { Object value = this.values.get(index); if (value == null) { throw new JSONException("Value at " + index + " is null."); } return value; } catch (IndexOutOfBoundsException e) { throw new JSONException("Index " + index + " out of range [0.." + this.values.size() + ")"); } }
Returns the value at {@code index}. @param index the index to get the value from @return the value at {@code index}. @throws JSONException if this array has no value at {@code index}, or if that value is the {@code null} reference. This method returns normally if the value is {@code JSONObject#NULL}.
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONArray.java
279
[ "index" ]
Object
true
3
8.24
spring-projects/spring-boot
79,428
javadoc
false
toString
@Override public String toString() { return "ItemIgnore{" + "type=" + this.type + ", name='" + this.name + '\'' + '}'; }
Return a string representation of this item ignore, including its type and name. @return a string representation of the ignore
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/metadata/ItemIgnore.java
82
[]
String
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
fchownSync
function fchownSync(fd, uid, gid) { validateInteger(uid, 'uid', -1, kMaxUserId); validateInteger(gid, 'gid', -1, kMaxUserId); if (permission.isEnabled()) { throw new ERR_ACCESS_DENIED('fchown API is disabled when Permission Model is enabled.'); } binding.fchown(fd, uid, gid); }
Synchronously sets the owner of the file. @param {number} fd @param {number} uid @param {number} gid @returns {void}
javascript
lib/fs.js
2,093
[ "fd", "uid", "gid" ]
false
2
6.08
nodejs/node
114,839
jsdoc
false
binaryToHexDigit
public static char binaryToHexDigit(final boolean[] src) { return binaryToHexDigit(src, 0); }
Converts binary (represented as boolean array) to a hexadecimal digit using the default (LSB0) bit ordering. <p> (1, 0, 0, 0) is converted as follow: '1'. </p> @param src the binary to convert. @return a hexadecimal digit representing the selected bits. @throws IllegalArgumentException if {@code src} is empty. @throws NullPointerException if {@code src} is {@code null}.
java
src/main/java/org/apache/commons/lang3/Conversion.java
184
[ "src" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
transformModuleBody
function transformModuleBody(node: ModuleDeclaration, namespaceLocalName: Identifier): Block { const savedCurrentNamespaceContainerName = currentNamespaceContainerName; const savedCurrentNamespace = currentNamespace; const savedCurrentScopeFirstDeclarationsOfName = currentScopeFirstDeclarationsOfName; currentNamespaceContainerName = namespaceLocalName; currentNamespace = node; currentScopeFirstDeclarationsOfName = undefined; const statements: Statement[] = []; startLexicalEnvironment(); let statementsLocation: TextRange | undefined; let blockLocation: TextRange | undefined; if (node.body) { if (node.body.kind === SyntaxKind.ModuleBlock) { saveStateAndInvoke(node.body, body => addRange(statements, visitNodes(body.statements, namespaceElementVisitor, isStatement))); statementsLocation = node.body.statements; blockLocation = node.body; } else { const result = visitModuleDeclaration(node.body as ModuleDeclaration); if (result) { if (isArray(result)) { addRange(statements, result); } else { statements.push(result); } } const moduleBlock = getInnerMostModuleDeclarationFromDottedModule(node)!.body as ModuleBlock; statementsLocation = moveRangePos(moduleBlock.statements, -1); } } insertStatementsAfterStandardPrologue(statements, endLexicalEnvironment()); currentNamespaceContainerName = savedCurrentNamespaceContainerName; currentNamespace = savedCurrentNamespace; currentScopeFirstDeclarationsOfName = savedCurrentScopeFirstDeclarationsOfName; const block = factory.createBlock( setTextRange( factory.createNodeArray(statements), /*location*/ statementsLocation, ), /*multiLine*/ true, ); setTextRange(block, blockLocation); // namespace hello.hi.world { // function foo() {} // // // TODO, blah // } // // should be emitted as // // var hello; // (function (hello) { // var hi; // (function (hi) { // var world; // (function (world) { // function foo() { } // // TODO, blah // })(world = hi.world || (hi.world = {})); // })(hi = hello.hi || (hello.hi = {})); // })(hello || (hello = {})); // We only want to emit comment on the namespace which contains block body itself, not the containing namespaces. if (!node.body || node.body.kind !== SyntaxKind.ModuleBlock) { setEmitFlags(block, getEmitFlags(block) | EmitFlags.NoComments); } return block; }
Transforms the body of a module declaration. @param node The module declaration node.
typescript
src/compiler/transformers/ts.ts
2,168
[ "node", "namespaceLocalName" ]
true
9
6.64
microsoft/TypeScript
107,154
jsdoc
false
assignedPartitionsList
public synchronized List<TopicPartition> assignedPartitionsList() { return new ArrayList<>(this.assignment.partitionSet()); }
@return a modifiable copy of the currently assigned partitions as a list
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
473
[]
true
1
6.16
apache/kafka
31,560
javadoc
false
iterator
@Override public Iterator<ObjectError> iterator() { return this.errors.iterator(); }
Return an iterator over the validation errors. @return the error iterator
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/validation/ValidationErrors.java
131
[]
true
1
6.8
spring-projects/spring-boot
79,428
javadoc
false
extractTargetClassFromFactoryBean
private Class<?> extractTargetClassFromFactoryBean(Class<?> factoryBeanType, ResolvableType beanType) { ResolvableType target = ResolvableType.forType(factoryBeanType).as(FactoryBean.class).getGeneric(0); if (target.getType().equals(Class.class)) { return target.toClass(); } else if (factoryBeanType.isAssignableFrom(beanType.toClass())) { return beanType.as(FactoryBean.class).getGeneric(0).toClass(); } return beanType.toClass(); }
Extract the target class of a public {@link FactoryBean} based on its constructor. If the implementation does not resolve the target class because it itself uses a generic, attempt to extract it from the bean type. @param factoryBeanType the factory bean type @param beanType the bean type @return the target class to use
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/DefaultBeanRegistrationCodeFragments.java
110
[ "factoryBeanType", "beanType" ]
true
3
7.76
spring-projects/spring-framework
59,386
javadoc
false
hasProgrammaticallySetProfiles
private boolean hasProgrammaticallySetProfiles(Type type, @Nullable String environmentPropertyValue, Set<String> environmentPropertyProfiles, Set<String> environmentProfiles) { if (!StringUtils.hasLength(environmentPropertyValue)) { return !type.getDefaultValue().equals(environmentProfiles); } if (type.getDefaultValue().equals(environmentProfiles)) { return false; } return !environmentPropertyProfiles.equals(environmentProfiles); }
Determine whether the given profiles were set programmatically on the {@link Environment} rather than via the environment property. @param type the type of profile (active or default) @param environmentPropertyValue the value of the profile property, if any @param environmentPropertyProfiles the profiles parsed from the property value @param environmentProfiles the profiles reported by the environment @return {@code true} if the profiles were set programmatically
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/Profiles.java
122
[ "type", "environmentPropertyValue", "environmentPropertyProfiles", "environmentProfiles" ]
true
3
6.08
spring-projects/spring-boot
79,428
javadoc
false
generateCodeForInaccessibleFactoryMethod
private CodeBlock generateCodeForInaccessibleFactoryMethod( String beanName, Method factoryMethod, Class<?> targetClass) { this.generationContext.getRuntimeHints().reflection().registerMethod(factoryMethod, ExecutableMode.INVOKE); GeneratedMethod getInstanceMethod = generateGetInstanceSupplierMethod(method -> { CodeWarnings codeWarnings = new CodeWarnings(); Class<?> suppliedType = ClassUtils.resolvePrimitiveIfNecessary(factoryMethod.getReturnType()); codeWarnings.detectDeprecation(suppliedType, factoryMethod); method.addJavadoc("Get the bean instance supplier for '$L'.", beanName); method.addModifiers(PRIVATE_STATIC); codeWarnings.suppress(method); method.returns(ParameterizedTypeName.get(BeanInstanceSupplier.class, suppliedType)); method.addStatement(generateInstanceSupplierForFactoryMethod( factoryMethod, suppliedType, targetClass, factoryMethod.getName())); }); return generateReturnStatement(getInstanceMethod); }
Generate the instance supplier code for a factory method that cannot be invoked directly and must be registered for reflective invocation. @param beanName the name of the bean @param factoryMethod the inaccessible factory method @param targetClass the class declaring the factory method @return the generated code
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/InstanceSupplierCodeGenerator.java
293
[ "beanName", "factoryMethod", "targetClass" ]
CodeBlock
true
1
6.24
spring-projects/spring-framework
59,386
javadoc
false
initialize
@SuppressWarnings("unchecked") protected T initialize() throws E { try { return initializer.get(); } catch (final Exception e) { // Do this first so we don't pass a RuntimeException or Error into an exception constructor ExceptionUtils.throwUnchecked(e); // Depending on the subclass of AbstractConcurrentInitializer E can be Exception or ConcurrentException // if E is Exception the if statement below will always be true, and the new Exception object created // in getTypedException will never be thrown. If E is ConcurrentException and the if statement is false // we throw the ConcurrentException returned from getTypedException, which wraps the original exception. final E typedException = getTypedException(e); if (typedException.getClass().isAssignableFrom(e.getClass())) { throw (E) e; } throw typedException; } }
Creates and initializes the object managed by this {@code ConcurrentInitializer}. This method is called by {@link #get()} when the object is accessed for the first time. An implementation can focus on the creation of the object. No synchronization is needed, as this is already handled by {@code get()}. <p> Subclasses and clients that do not provide an initializer are expected to implement this method. </p> @return the managed data object. @throws E if an error occurs during object creation.
java
src/main/java/org/apache/commons/lang3/concurrent/AbstractConcurrentInitializer.java
176
[]
T
true
3
8.08
apache/commons-lang
2,896
javadoc
false
validate
public ActionRequestValidationException validate() { ActionRequestValidationException err = new ActionRequestValidationException(); // how do we cross the id validation divide here? or do we? it seems unfortunate to not invoke it at all. // name validation if (Strings.hasText(name) == false) { err.addValidationError("invalid name [" + name + "]: cannot be empty"); } // provider-specific name validation if (provider instanceof Maxmind) { if (MAXMIND_NAMES.contains(name) == false) { err.addValidationError("invalid name [" + name + "]: must be a supported name ([" + MAXMIND_NAMES + "])"); } } if (provider instanceof Ipinfo) { if (IPINFO_NAMES.contains(name) == false) { err.addValidationError("invalid name [" + name + "]: must be a supported name ([" + IPINFO_NAMES + "])"); } } // important: the name must be unique across all configurations of this same type, // but we validate that in the cluster state update, not here. try { validateId(id); } catch (IllegalArgumentException e) { err.addValidationError(e.getMessage()); } return err.validationErrors().isEmpty() ? null : err; }
Validates this database configuration. The name must be non-empty and, for the Maxmind and Ipinfo providers, must be one of the supported database names; the id is validated via {@code validateId}. Note that uniqueness of the name across configurations of the same type is validated in the cluster state update, not here. @return the validation errors, or {@code null} if the configuration is valid
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/direct/DatabaseConfiguration.java
190
[]
ActionRequestValidationException
true
8
7.04
elastic/elasticsearch
75,680
javadoc
false
get_conn_value
def get_conn_value(self, conn_id: str) -> str | None: """ Get serialized representation of Connection. :param conn_id: connection id """ if self.connections_prefix is None: return None secret = self._get_secret(self.connections_prefix, conn_id, self.connections_lookup_pattern) if secret is not None and secret.strip().startswith("{"): # Before Airflow 2.3, the AWS SecretsManagerBackend added support for JSON secrets. # # The way this was implemented differs a little from how Airflow's core API handle JSON secrets. # # The most notable difference is that SecretsManagerBackend supports extra aliases for the # Connection parts, e.g. "users" is allowed instead of "login". # # This means we need to deserialize then re-serialize the secret if it's a JSON, potentially # renaming some keys in the process. secret_dict = json.loads(secret) standardized_secret_dict = self._standardize_secret_keys(secret_dict) standardized_secret = json.dumps(standardized_secret_dict) return standardized_secret return secret
Get serialized representation of Connection. :param conn_id: connection id
python
providers/amazon/src/airflow/providers/amazon/aws/secrets/secrets_manager.py
200
[ "self", "conn_id" ]
str | None
true
4
6.56
apache/airflow
43,597
sphinx
false
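The JSON round-trip described in the comments above amounts to: if the secret looks like JSON, deserialize it, rename any alias keys to Airflow's canonical Connection fields, and re-serialize. A simplified sketch (the alias table here is an illustrative subset, not the backend's full mapping):

```python
import json

ALIASES = {"user": "login", "pass": "password"}  # illustrative subset

def standardize(secret: str) -> str:
    if not secret.strip().startswith("{"):
        return secret  # URI-style secrets pass through unchanged
    data = json.loads(secret)
    return json.dumps({ALIASES.get(k, k): v for k, v in data.items()})

print(standardize('{"user": "admin", "pass": "s3cret"}'))
# {"login": "admin", "password": "s3cret"}
```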
createCJSModuleWrap
function createCJSModuleWrap(url, translateContext, parentURL, loadCJS = loadCJSModule) { debug(`Translating CJSModule ${url}`, translateContext); const { format: sourceFormat } = translateContext; let { source } = translateContext; const isMain = (parentURL === undefined); const filename = urlToFilename(url); // In case the source was not provided by the `load` step, we need to fetch it now. source = stringify(source ?? getSource(new URL(url)).source); const { exportNames, module } = cjsPreparseModuleExports(filename, source, sourceFormat); cjsCache.set(url, module); const wrapperNames = [...exportNames]; if (!exportNames.has('default')) { ArrayPrototypePush(wrapperNames, 'default'); } if (!exportNames.has('module.exports')) { ArrayPrototypePush(wrapperNames, 'module.exports'); } if (isMain) { setOwnProperty(process, 'mainModule', module); } return new ModuleWrap(url, undefined, wrapperNames, function() { debug(`Loading CJSModule ${url}`); if (!module.loaded) { loadCJS(module, source, url, filename, !!isMain); } let exports; if (module[kModuleExport] !== undefined) { exports = module[kModuleExport]; module[kModuleExport] = undefined; } else { ({ exports } = module); } for (const exportName of exportNames) { if (exportName === 'default' || exportName === 'module.exports' || !ObjectPrototypeHasOwnProperty(exports, exportName)) { continue; } // We might trigger a getter -> don't fail. let value; try { value = exports[exportName]; } catch { // Continue regardless of error. } this.setExport(exportName, value); } this.setExport('default', exports); this.setExport('module.exports', exports); }, module); }
Creates a ModuleWrap object for a CommonJS module. @param {string} url - The URL of the module. @param {{ format: ModuleFormat, source: ModuleSource }} translateContext Context for the translator @param {string|undefined} parentURL URL of the module initiating the module loading for the first time. Undefined if it's the entry point. @param {typeof loadCJSModule} [loadCJS] - The function to load the CommonJS module. @returns {ModuleWrap} The ModuleWrap object for the CommonJS module.
javascript
lib/internal/modules/esm/translators.js
217
[ "url", "translateContext", "parentURL" ]
false
11
6.08
nodejs/node
114,839
jsdoc
false
join
function join(array, separator) { return array == null ? '' : nativeJoin.call(array, separator); }
Converts all elements in `array` into a string separated by `separator`. @static @memberOf _ @since 4.0.0 @category Array @param {Array} array The array to convert. @param {string} [separator=','] The element separator. @returns {string} Returns the joined string. @example _.join(['a', 'b', 'c'], '~'); // => 'a~b~c'
javascript
lodash.js
7,694
[ "array", "separator" ]
false
2
7.12
lodash/lodash
61,490
jsdoc
false
getIpinfoLookup
@Nullable static Function<Set<Database.Property>, IpDataLookup> getIpinfoLookup(final Database database) { return switch (database) { case AsnV2 -> IpinfoIpDataLookups.Asn::new; case CountryV2 -> IpinfoIpDataLookups.Country::new; case CityV2 -> IpinfoIpDataLookups.Geolocation::new; case PrivacyDetection -> IpinfoIpDataLookups.PrivacyDetection::new; default -> null; }; }
Returns a factory for the {@code IpDataLookup} implementation that corresponds to the given ipinfo database, or {@code null} if the database is not a supported ipinfo database. @param database the database to resolve a lookup for @return a function from the requested properties to an {@code IpDataLookup}, or {@code null}
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IpinfoIpDataLookups.java
114
[ "database" ]
true
1
7.04
elastic/elasticsearch
75,680
javadoc
false
inRange
function inRange(number, start, end) { start = toFinite(start); if (end === undefined) { end = start; start = 0; } else { end = toFinite(end); } number = toNumber(number); return baseInRange(number, start, end); }
Checks if `n` is between `start` and up to, but not including, `end`. If `end` is not specified, it's set to `start` with `start` then set to `0`. If `start` is greater than `end` the params are swapped to support negative ranges. @static @memberOf _ @since 3.3.0 @category Number @param {number} number The number to check. @param {number} [start=0] The start of the range. @param {number} end The end of the range. @returns {boolean} Returns `true` if `number` is in the range, else `false`. @see _.range, _.rangeRight @example _.inRange(3, 2, 4); // => true _.inRange(4, 8); // => true _.inRange(4, 2); // => false _.inRange(2, 2); // => false _.inRange(1.2, 2); // => true _.inRange(5.2, 4); // => false _.inRange(-3, -2, -6); // => true
javascript
lodash.js
14,142
[ "number", "start", "end" ]
false
3
7.52
lodash/lodash
61,490
jsdoc
false
update_range
def update_range(self, start: int, end: int, value: T) -> None: """ Update a range of values in the segment tree. Args: start: Start index of the range to update (inclusive) end: End index of the range to update (inclusive) value: Value to apply to the range Raises: ValueError: If start > end or indices are out of bounds """ if start > end: raise ValueError("Start index must be less than or equal to end index") if start < 0 or start >= self.n: raise ValueError(f"Start index {start} out of bounds [0, {self.n - 1}]") if end < 0 or end >= self.n: raise ValueError(f"End index {end} out of bounds [0, {self.n - 1}]") self._update_range_helper(1, 0, self.n - 1, start, end, value)
Update a range of values in the segment tree. Args: start: Start index of the range to update (inclusive) end: End index of the range to update (inclusive) value: Value to apply to the range Raises: ValueError: If start > end or indices are out of bounds
python
torch/_inductor/codegen/segmented_tree.py
196
[ "self", "start", "end", "value" ]
None
true
6
6.72
pytorch/pytorch
96,034
google
false
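The bounds checks in `update_range` are worth seeing in isolation: both endpoints are inclusive and must fall in [0, n - 1]. A self-contained sketch of just that validation (the helper name is hypothetical):

```python
def validate_range(start: int, end: int, n: int) -> None:
    """Inclusive-range checks matching update_range's error messages."""
    if start > end:
        raise ValueError("Start index must be less than or equal to end index")
    for name, idx in (("Start", start), ("End", end)):
        if idx < 0 or idx >= n:
            raise ValueError(f"{name} index {idx} out of bounds [0, {n - 1}]")

validate_range(2, 5, 8)  # fine
try:
    validate_range(5, 2, 8)
except ValueError as e:
    print(e)  # Start index must be less than or equal to end index
```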
add
public boolean add(CompoundStat stat) { return add(stat, null); }
Register a compound statistic with this sensor with no config override @param stat The stat to register @return true if stat is added to sensor, false if sensor is expired
java
clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java
279
[ "stat" ]
true
1
6.32
apache/kafka
31,560
javadoc
false
buildIndexedPropertyName
private @Nullable String buildIndexedPropertyName(@Nullable String propertyName, int index) { return (propertyName != null ? propertyName + PropertyAccessor.PROPERTY_KEY_PREFIX + index + PropertyAccessor.PROPERTY_KEY_SUFFIX : null); }
Build an indexed property name by appending the given index to the property name, for example {@code names} and {@code 1} yield {@code names[1]}. @param propertyName the base property name (may be {@code null}) @param index the index to append @return the indexed property name, or {@code null} if no property name was given
java
spring-beans/src/main/java/org/springframework/beans/TypeConverterDelegate.java
626
[ "propertyName", "index" ]
String
true
2
7.6
spring-projects/spring-framework
59,386
javadoc
false
zfill
def zfill(self, width: int): """ Pad strings in the Series/Index by prepending '0' characters. Strings in the Series/Index are padded with '0' characters on the left of the string to reach a total string length `width`. Strings in the Series/Index with length greater or equal to `width` are unchanged. Parameters ---------- width : int Minimum length of resulting string; strings with length less than `width` be prepended with '0' characters. Returns ------- Series/Index of objects. A Series or Index where the strings are prepended with '0' characters. See Also -------- Series.str.rjust : Fills the left side of strings with an arbitrary character. Series.str.ljust : Fills the right side of strings with an arbitrary character. Series.str.pad : Fills the specified sides of strings with an arbitrary character. Series.str.center : Fills both sides of strings with an arbitrary character. Notes ----- Differs from :meth:`str.zfill` which has special handling for '+'/'-' in the string. Examples -------- >>> s = pd.Series(["-1", "1", "1000", 10, np.nan]) >>> s 0 -1 1 1 2 1000 3 10 4 NaN dtype: object Note that ``10`` and ``NaN`` are not strings, therefore they are converted to ``NaN``. The minus sign in ``'-1'`` is treated as a special character and the zero is added to the right of it (:meth:`str.zfill` would have moved it to the left). ``1000`` remains unchanged as it is longer than `width`. >>> s.str.zfill(3) 0 -01 1 001 2 1000 3 NaN 4 NaN dtype: object """ if not is_integer(width): msg = f"width must be of integer type, not {type(width).__name__}" raise TypeError(msg) result = self._data.array._str_zfill(width) return self._wrap_result(result)
Pad strings in the Series/Index by prepending '0' characters. Strings in the Series/Index are padded with '0' characters on the left of the string to reach a total string length `width`. Strings in the Series/Index with length greater or equal to `width` are unchanged. Parameters ---------- width : int Minimum length of resulting string; strings with length less than `width` be prepended with '0' characters. Returns ------- Series/Index of objects. A Series or Index where the strings are prepended with '0' characters. See Also -------- Series.str.rjust : Fills the left side of strings with an arbitrary character. Series.str.ljust : Fills the right side of strings with an arbitrary character. Series.str.pad : Fills the specified sides of strings with an arbitrary character. Series.str.center : Fills both sides of strings with an arbitrary character. Notes ----- Differs from :meth:`str.zfill` which has special handling for '+'/'-' in the string. Examples -------- >>> s = pd.Series(["-1", "1", "1000", 10, np.nan]) >>> s 0 -1 1 1 2 1000 3 10 4 NaN dtype: object Note that ``10`` and ``NaN`` are not strings, therefore they are converted to ``NaN``. The minus sign in ``'-1'`` is treated as a special character and the zero is added to the right of it (:meth:`str.zfill` would have moved it to the left). ``1000`` remains unchanged as it is longer than `width`. >>> s.str.zfill(3) 0 -01 1 001 2 1000 3 NaN 4 NaN dtype: object
python
pandas/core/strings/accessor.py
1,891
[ "self", "width" ]
true
2
8.4
pandas-dev/pandas
47,362
numpy
false
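A common use of `zfill` is normalizing numeric codes to a fixed width after converting them to strings; a short sketch:

```python
import pandas as pd

codes = pd.Series([7, 42, 123]).astype(str)
print(codes.str.zfill(5).tolist())  # ['00007', '00042', '00123']
```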
reader
protected Reader reader(Path path) throws IOException { return Files.newBufferedReader(path, StandardCharsets.UTF_8); }
Opens a buffered UTF-8 {@link Reader} for the given file. @param path the file to read @return a new buffered reader for the file @throws IOException if the file cannot be opened
java
clients/src/main/java/org/apache/kafka/common/config/provider/FileConfigProvider.java
134
[ "path" ]
Reader
true
1
6.96
apache/kafka
31,560
javadoc
false
loadDocument
Document loadDocument( InputSource inputSource, EntityResolver entityResolver, ErrorHandler errorHandler, int validationMode, boolean namespaceAware) throws Exception;
Load a {@link Document document} from the supplied {@link InputSource source}. @param inputSource the source of the document that is to be loaded @param entityResolver the resolver that is to be used to resolve any entities @param errorHandler used to report any errors during document loading @param validationMode the type of validation ({@link org.springframework.util.xml.XmlValidationModeDetector#VALIDATION_DTD DTD} or {@link org.springframework.util.xml.XmlValidationModeDetector#VALIDATION_XSD XSD}) @param namespaceAware {@code true} if support for XML namespaces is to be provided @return the loaded {@link Document document} @throws Exception if an error occurs
java
spring-beans/src/main/java/org/springframework/beans/factory/xml/DocumentLoader.java
45
[ "inputSource", "entityResolver", "errorHandler", "validationMode", "namespaceAware" ]
Document
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
isSameInstant
public static boolean isSameInstant(final Date date1, final Date date2) { Objects.requireNonNull(date1, "date1"); Objects.requireNonNull(date2, "date2"); return date1.getTime() == date2.getTime(); }
Tests whether two date objects represent the same instant in time. <p>This method compares the long millisecond time of the two objects.</p> @param date1 the first date, not altered, not null. @param date2 the second date, not altered, not null. @return true if they represent the same millisecond instant. @throws NullPointerException if either date is {@code null}. @since 2.1
java
src/main/java/org/apache/commons/lang3/time/DateUtils.java
916
[ "date1", "date2" ]
true
1
6.88
apache/commons-lang
2,896
javadoc
false
checkDisconnects
private void checkDisconnects(long now) { // any disconnects affecting requests that have already been transmitted will be handled // by NetworkClient, so we just need to check whether connections for any of the unsent // requests have been disconnected; if they have, then we complete the corresponding future // and set the disconnect flag in the ClientResponse for (Node node : unsent.nodes()) { if (client.connectionFailed(node)) { // Remove entry before invoking request callback to avoid callbacks handling // coordinator failures traversing the unsent list again. Collection<ClientRequest> requests = unsent.remove(node); for (ClientRequest request : requests) { RequestFutureCompletionHandler handler = (RequestFutureCompletionHandler) request.callback(); AuthenticationException authenticationException = client.authenticationException(node); handler.onComplete(new ClientResponse(request.makeHeader(request.requestBuilder().latestAllowedVersion()), request.callback(), request.destination(), request.createdTimeMs(), now, true, null, authenticationException, null)); } } } }
Check whether the connection for any node with unsent requests has been disconnected; if so, remove the corresponding requests and complete their futures with the disconnect flag set in the {@link ClientResponse}. @param now the current time in milliseconds
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
438
[ "now" ]
void
true
2
6.88
apache/kafka
31,560
javadoc
false
getExternallyManagedConfigMembers
public Set<Member> getExternallyManagedConfigMembers() { synchronized (this.postProcessingLock) { return (this.externallyManagedConfigMembers != null ? Collections.unmodifiableSet(new LinkedHashSet<>(this.externallyManagedConfigMembers)) : Collections.emptySet()); } }
Get all externally managed configuration methods and fields (as an immutable Set). @since 5.3.11
java
spring-beans/src/main/java/org/springframework/beans/factory/support/RootBeanDefinition.java
481
[]
true
2
6.88
spring-projects/spring-framework
59,386
javadoc
false
runDownloader
@Override void runDownloader() { if (isCancelled() || isCompleted()) { logger.debug("Not running downloader because task is cancelled or completed"); return; } // by the time we reach here, the state will never be null assert this.state != null : "this.state is null. You need to call setState() before calling runDownloader()"; try { updateDatabases(); // n.b. this downloads bytes from the internet, it can take a while } catch (Exception e) { logger.error("exception during databases update", e); } try { cleanDatabases(); } catch (Exception e) { logger.error("exception during databases cleanup", e); } }
Runs the downloader unless the task has been cancelled or completed. Updates the databases (which downloads bytes from the internet and can take a while) and then cleans up old databases, logging any exceptions rather than propagating them. Requires {@code setState()} to have been called first.
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/EnterpriseGeoIpDownloader.java
385
[]
void
true
5
6.72
elastic/elasticsearch
75,680
javadoc
false
intersect1d
def intersect1d(ar1, ar2, assume_unique=False, return_indices=False): """ Find the intersection of two arrays. Return the sorted, unique values that are in both of the input arrays. Parameters ---------- ar1, ar2 : array_like Input arrays. Will be flattened if not already 1D. assume_unique : bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. If True but ``ar1`` or ``ar2`` are not unique, incorrect results and out-of-bounds indices could result. Default is False. return_indices : bool If True, the indices which correspond to the intersection of the two arrays are returned. The first instance of a value is used if there are multiple. Default is False. Returns ------- intersect1d : ndarray Sorted 1D array of common and unique elements. comm1 : ndarray The indices of the first occurrences of the common values in `ar1`. Only provided if `return_indices` is True. comm2 : ndarray The indices of the first occurrences of the common values in `ar2`. Only provided if `return_indices` is True. Examples -------- >>> import numpy as np >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]) array([1, 3]) To intersect more than two arrays, use functools.reduce: >>> from functools import reduce >>> reduce(np.intersect1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([3]) To return the indices of the values common to the input arrays along with the intersected values: >>> x = np.array([1, 1, 2, 3, 4]) >>> y = np.array([2, 1, 4, 6]) >>> xy, x_ind, y_ind = np.intersect1d(x, y, return_indices=True) >>> x_ind, y_ind (array([0, 2, 4]), array([1, 0, 2])) >>> xy, x[x_ind], y[y_ind] (array([1, 2, 4]), array([1, 2, 4]), array([1, 2, 4])) """ ar1 = np.asanyarray(ar1) ar2 = np.asanyarray(ar2) if not assume_unique: if return_indices: ar1, ind1 = unique(ar1, return_index=True) ar2, ind2 = unique(ar2, return_index=True) else: ar1 = unique(ar1) ar2 = unique(ar2) else: ar1 = ar1.ravel() ar2 = ar2.ravel() aux = np.concatenate((ar1, ar2)) if return_indices: aux_sort_indices = np.argsort(aux, kind='mergesort') aux = aux[aux_sort_indices] else: aux.sort() mask = aux[1:] == aux[:-1] int1d = aux[:-1][mask] if return_indices: ar1_indices = aux_sort_indices[:-1][mask] ar2_indices = aux_sort_indices[1:][mask] - ar1.size if not assume_unique: ar1_indices = ind1[ar1_indices] ar2_indices = ind2[ar2_indices] return int1d, ar1_indices, ar2_indices else: return int1d
Find the intersection of two arrays. Return the sorted, unique values that are in both of the input arrays. Parameters ---------- ar1, ar2 : array_like Input arrays. Will be flattened if not already 1D. assume_unique : bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. If True but ``ar1`` or ``ar2`` are not unique, incorrect results and out-of-bounds indices could result. Default is False. return_indices : bool If True, the indices which correspond to the intersection of the two arrays are returned. The first instance of a value is used if there are multiple. Default is False. Returns ------- intersect1d : ndarray Sorted 1D array of common and unique elements. comm1 : ndarray The indices of the first occurrences of the common values in `ar1`. Only provided if `return_indices` is True. comm2 : ndarray The indices of the first occurrences of the common values in `ar2`. Only provided if `return_indices` is True. Examples -------- >>> import numpy as np >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]) array([1, 3]) To intersect more than two arrays, use functools.reduce: >>> from functools import reduce >>> reduce(np.intersect1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([3]) To return the indices of the values common to the input arrays along with the intersected values: >>> x = np.array([1, 1, 2, 3, 4]) >>> y = np.array([2, 1, 4, 6]) >>> xy, x_ind, y_ind = np.intersect1d(x, y, return_indices=True) >>> x_ind, y_ind (array([0, 2, 4]), array([1, 0, 2])) >>> xy, x[x_ind], y[y_ind] (array([1, 2, 4]), array([1, 2, 4]), array([1, 2, 4]))
python
numpy/lib/_arraysetops_impl.py
667
[ "ar1", "ar2", "assume_unique", "return_indices" ]
false
10
7.6
numpy/numpy
31,054
numpy
false
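A hedged usage sketch using only public NumPy APIs: pre-uniquifying the inputs lets `assume_unique=True` safely skip the internal `unique()` passes.

```python
import numpy as np

a = np.array([1, 3, 4, 3])
b = np.array([3, 1, 2, 1])

# unique() sorts and de-duplicates, so assume_unique=True is safe here.
ua, ub = np.unique(a), np.unique(b)
print(np.intersect1d(ua, ub, assume_unique=True))  # [1 3]
```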
pprint_thing
def pprint_thing( thing: object, _nest_lvl: int = 0, escape_chars: EscapeChars | None = None, default_escapes: bool = False, quote_strings: bool = False, max_seq_items: int | None = None, ) -> str: """ This function is the sanctioned way of converting objects to a string representation and properly handles nested sequences. Parameters ---------- thing : anything to be formatted _nest_lvl : internal use only. pprint_thing() is mutually-recursive with pprint_sequence, this argument is used to keep track of the current nesting level, and limit it. escape_chars : list[str] or Mapping[str, str], optional Characters to escape. If a Mapping is passed the values are the replacements default_escapes : bool, default False Whether the input escape characters replaces or adds to the defaults max_seq_items : int or None, default None Pass through to other pretty printers to limit sequence printing Returns ------- str """ def as_escaped_string( thing: Any, escape_chars: EscapeChars | None = escape_chars ) -> str: translate = {"\t": r"\t", "\n": r"\n", "\r": r"\r", "'": r"\'"} if isinstance(escape_chars, Mapping): if default_escapes: translate.update(escape_chars) else: translate = escape_chars # type: ignore[assignment] escape_chars = list(escape_chars.keys()) else: escape_chars = escape_chars or () result = str(thing) for c in escape_chars: result = result.replace(c, translate[c]) return result if hasattr(thing, "__next__"): return str(thing) elif isinstance(thing, Mapping) and _nest_lvl < get_option( "display.pprint_nest_depth" ): result = _pprint_dict( thing, _nest_lvl, quote_strings=True, max_seq_items=max_seq_items ) elif is_sequence(thing) and _nest_lvl < get_option("display.pprint_nest_depth"): result = _pprint_seq( # error: Argument 1 to "_pprint_seq" has incompatible type "object"; # expected "ExtensionArray | ndarray[Any, Any] | Index | Series | # SequenceNotStr[Any] | range" thing, # type: ignore[arg-type] _nest_lvl, escape_chars=escape_chars, quote_strings=quote_strings, max_seq_items=max_seq_items, ) elif isinstance(thing, str) and quote_strings: result = f"'{as_escaped_string(thing)}'" else: result = as_escaped_string(thing) return result
This function is the sanctioned way of converting objects to a string representation and properly handles nested sequences. Parameters ---------- thing : anything to be formatted _nest_lvl : internal use only. pprint_thing() is mutually-recursive with pprint_sequence, this argument is used to keep track of the current nesting level, and limit it. escape_chars : list[str] or Mapping[str, str], optional Characters to escape. If a Mapping is passed the values are the replacements default_escapes : bool, default False Whether the input escape characters replaces or adds to the defaults max_seq_items : int or None, default None Pass through to other pretty printers to limit sequence printing Returns ------- str
python
pandas/io/formats/printing.py
174
[ "thing", "_nest_lvl", "escape_chars", "default_escapes", "quote_strings", "max_seq_items" ]
str
true
15
6.8
pandas-dev/pandas
47,362
numpy
false
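A minimal sketch of calling this helper directly; it is a pandas-internal utility, so the import path below may change between versions.

```python
from pandas.io.formats.printing import pprint_thing

# Characters listed in escape_chars are replaced by their escaped forms.
print(pprint_thing("a\tb", escape_chars=["\t"]))  # a\tb
# quote_strings wraps string results in single quotes.
print(pprint_thing("a\tb", escape_chars=["\t"], quote_strings=True))  # 'a\tb'
# Mappings and sequences are pretty-printed recursively.
print(pprint_thing({"x": [1, 2, 3]}))  # {'x': [1, 2, 3]}
```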
collect
@SafeVarargs public static <T, R, A> R collect(final Collector<? super T, A, R> collector, final T... array) { return Streams.of(array).collect(collector); }
Delegates to {@link Stream#collect(Collector)} for a Stream on the given array. @param <T> The type of the array elements. @param <R> the type of the result. @param <A> the intermediate accumulation type of the {@code Collector}. @param collector the {@code Collector} describing the reduction. @param array The array, assumed to be unmodified during use, a null array treated as an empty array. @return the result of the reduction @see Stream#collect(Collector) @see Arrays#stream(Object[]) @see Collectors @since 3.16.0
java
src/main/java/org/apache/commons/lang3/stream/LangCollectors.java
110
[ "collector" ]
R
true
1
6.64
apache/commons-lang
2,896
javadoc
false
finalize
@SuppressWarnings({"removal", "Finalize"}) // b/260137033 @Override protected void finalize() { if (state.get().equals(OPEN)) { logger.get().log(SEVERE, "Uh oh! An open ClosingFuture has leaked and will close: {0}", this); FluentFuture<V> unused = finishToFuture(); } }
Logs a severe message and closes this {@link ClosingFuture} via {@link #finishToFuture()} if it is still open when it becomes unreachable, since an open future that gets garbage collected has leaked.
java
android/guava/src/com/google/common/util/concurrent/ClosingFuture.java
2,096
[]
void
true
2
6.56
google/guava
51,352
javadoc
false
optimizedTextOrNull
XContentString optimizedTextOrNull() throws IOException;
Returns the text value as an {@link XContentString} in an optimized form, or {@code null} if no optimized representation is available. @throws IOException if reading the value fails
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/XContentParser.java
114
[]
XContentString
true
1
6.32
elastic/elasticsearch
75,680
javadoc
false
setUncaughtExceptionCaptureCallback
function setUncaughtExceptionCaptureCallback(fn) { if (fn === null) { exceptionHandlerState.captureFn = fn; shouldAbortOnUncaughtToggle[0] = 1; process.report.reportOnUncaughtException = exceptionHandlerState.reportFlag; return; } if (typeof fn !== 'function') { throw new ERR_INVALID_ARG_TYPE('fn', ['Function', 'null'], fn); } if (exceptionHandlerState.captureFn !== null) { throw new ERR_UNCAUGHT_EXCEPTION_CAPTURE_ALREADY_SET(); } exceptionHandlerState.captureFn = fn; shouldAbortOnUncaughtToggle[0] = 0; exceptionHandlerState.reportFlag = process.report.reportOnUncaughtException === true; process.report.reportOnUncaughtException = false; }
Sets or clears the callback invoked when an uncaught exception occurs. Passing {@code null} clears the callback, restores abort-on-uncaught behavior and re-enables uncaught-exception reports; passing a function installs it as the capture callback and disables both. @param {Function|null} fn The capture callback, or null to unset it @throws {ERR_INVALID_ARG_TYPE} if fn is neither a function nor null @throws {ERR_UNCAUGHT_EXCEPTION_CAPTURE_ALREADY_SET} if a capture callback has already been set
javascript
lib/internal/process/execution.js
112
[ "fn" ]
false
4
6.08
nodejs/node
114,839
jsdoc
false
write_gh_step_summary
def write_gh_step_summary(md: str, *, append_content: bool = True) -> bool: """ Write Markdown content to the GitHub Step Summary file if GITHUB_STEP_SUMMARY is set. append_content: default true, if True, append to the end of the file, else overwrite the whole file Returns: True if written successfully (in GitHub Actions environment), False if skipped (e.g., running locally where the variable is not set). """ sp = gh_summary_path() if not sp: logger.info("[gh-summary] GITHUB_STEP_SUMMARY not set, skipping write.") return False md_clean = textwrap.dedent(md).strip() + "\n" mode = "a" if append_content else "w" with sp.open(mode, encoding="utf-8") as f: f.write(md_clean) return True
Write Markdown content to the GitHub Step Summary file if GITHUB_STEP_SUMMARY is set. append_content: default true, if True, append to the end of the file, else overwrite the whole file Returns: True if written successfully (in GitHub Actions environment), False if skipped (e.g., running locally where the variable is not set).
python
.ci/lumen_cli/cli/lib/common/gh_summary.py
61
[ "md", "append_content" ]
bool
true
3
7.92
pytorch/pytorch
96,034
unknown
false
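A standalone sketch of the same pattern outside this repo's CLI helpers; GITHUB_STEP_SUMMARY is the variable GitHub Actions actually sets, while the function name and content here are illustrative.

```python
import os
from pathlib import Path

def write_summary(md: str) -> bool:
    # GitHub Actions points this variable at a per-step Markdown file.
    path = os.environ.get("GITHUB_STEP_SUMMARY")
    if not path:
        return False  # e.g. running locally
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(md.strip() + "\n")
    return True

write_summary("## Build report\n- tests: passed")
```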
ensure_plugins_loaded
def ensure_plugins_loaded(): """ Load plugins from plugins directory and entrypoints. Plugins are only loaded if they have not been previously loaded. """ from airflow.observability.stats import Stats global plugins if plugins is not None: log.debug("Plugins are already loaded. Skipping.") return if not settings.PLUGINS_FOLDER: raise ValueError("Plugins folder is not set") log.debug("Loading plugins") with Stats.timer() as timer: plugins = [] load_plugins_from_plugin_directory() load_entrypoint_plugins() if not settings.LAZY_LOAD_PROVIDERS: load_providers_plugins() if plugins: log.debug("Loading %d plugin(s) took %.2f seconds", len(plugins), timer.duration)
Load plugins from plugins directory and entrypoints. Plugins are only loaded if they have not been previously loaded.
python
airflow-core/src/airflow/plugins_manager.py
334
[]
false
5
6.08
apache/airflow
43,597
unknown
false
close
public static void close(final Exception ex, final Closeable... objects) throws IOException { Exception firstException = ex; for (final Closeable object : objects) { try { close(object); } catch (final IOException | RuntimeException e) { firstException = addOrSuppress(firstException, e); } } if (firstException != null) { throwRuntimeOrIOException(firstException); } }
Closes all given {@link Closeable}s. Some of the {@linkplain Closeable}s may be null; they are ignored. After everything is closed, the method adds any exceptions as suppressed to the original exception, or throws the first exception it hit if {@code ex} is null. If no exceptions are encountered and the passed in exception is null, it completes normally. @param ex the original exception to add suppressed exceptions to, may be null @param objects objects to close
java
libs/core/src/main/java/org/elasticsearch/core/IOUtils.java
83
[ "ex" ]
void
true
3
6.88
elastic/elasticsearch
75,680
javadoc
false
inner
def inner(*args: _P.args, **kwargs: _P.kwargs) -> _R: """Call the original function and cache the result. Args: *args: Positional arguments to pass to the function. **kwargs: Keyword arguments to pass to the function. Returns: The result of calling the original function. """ # Call the function to compute the result result = fn(*args, **kwargs) # Generate cache key from parameters cache_key = self._make_key(custom_params_encoder, *args, **kwargs) # Encode params for human-readable dump if custom_params_encoder is not None: encoded_params = custom_params_encoder(*args, **kwargs) else: encoded_params = { "args": args, "kwargs": kwargs, } # Encode the result if encoder is provided if custom_result_encoder is not None: # Get the encoder function by calling the factory with params encoder_fn = custom_result_encoder(*args, **kwargs) encoded_result = encoder_fn(result) else: encoded_result = result # Store CacheEntry in cache cache_entry = CacheEntry( encoded_params=encoded_params, encoded_result=encoded_result, ) self._cache.insert(cache_key, cache_entry) # Return the original result (not the encoded version) return result
Call the original function and cache the result. Args: *args: Positional arguments to pass to the function. **kwargs: Keyword arguments to pass to the function. Returns: The result of calling the original function.
python
torch/_inductor/runtime/caching/interfaces.py
467
[]
_R
true
5
8.08
pytorch/pytorch
96,034
google
false
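A self-contained sketch of the surrounding decorator pattern (names here are illustrative, not the repo's actual interfaces): compute the result, derive a cache key from the call parameters, store an entry, and hand back the unencoded result.

```python
import functools
import json

def caching(fn):
    cache: dict[str, object] = {}

    @functools.wraps(fn)
    def inner(*args, **kwargs):
        result = fn(*args, **kwargs)  # always compute, as in the snippet above
        key = json.dumps({"args": args, "kwargs": kwargs},
                         sort_keys=True, default=str)
        cache[key] = result           # store by parameter-derived key
        return result                 # return the original, unencoded result

    inner.cache = cache               # expose for inspection
    return inner

@caching
def add(a, b):
    return a + b

add(1, 2)
print(add.cache)  # {'{"args": [1, 2], "kwargs": {}}': 3}
```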
_sign_bundle_url
def _sign_bundle_url(url: str, bundle_name: str) -> str: """ Sign a bundle URL for integrity verification. :param url: The URL to sign :param bundle_name: The name of the bundle (used in the payload) :return: The signed URL token """ serializer = URLSafeSerializer(conf.get_mandatory_value("core", "fernet_key")) payload = { "url": url, "bundle_name": bundle_name, } return serializer.dumps(payload)
Sign a bundle URL for integrity verification. :param url: The URL to sign :param bundle_name: The name of the bundle (used in the payload) :return: The signed URL token
python
airflow-core/src/airflow/dag_processing/bundles/manager.py
148
[ "url", "bundle_name" ]
str
true
1
7.04
apache/airflow
43,597
sphinx
false
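A self-contained sketch of the same signing round-trip with itsdangerous; the secret key below is illustrative, whereas Airflow reads it from the [core] fernet_key setting.

```python
from itsdangerous import URLSafeSerializer

serializer = URLSafeSerializer("illustrative-secret-key")
token = serializer.dumps({"url": "https://example.com/bundle.tar.gz",
                          "bundle_name": "my-bundle"})

# loads() verifies the signature and raises BadSignature on tampering.
payload = serializer.loads(token)
assert payload["bundle_name"] == "my-bundle"
```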
toString
@Override public String toString() { return getDir().toString(); }
Returns a String representation of the application home directory. @return the home directory path (never {@code null})
java
core/spring-boot/src/main/java/org/springframework/boot/system/ApplicationHome.java
174
[]
String
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
_compute_oob_predictions
def _compute_oob_predictions(self, X, y): """Compute and set the OOB score. Parameters ---------- X : array-like of shape (n_samples, n_features) The data matrix. y : ndarray of shape (n_samples, n_outputs) The target matrix. Returns ------- oob_pred : ndarray of shape (n_samples, n_classes, n_outputs) or \ (n_samples, 1, n_outputs) The OOB predictions. """ # Prediction requires X to be in CSR format if issparse(X): X = X.tocsr() n_samples = y.shape[0] n_outputs = self.n_outputs_ if is_classifier(self) and hasattr(self, "n_classes_"): # n_classes_ is an ndarray at this stage # all the supported type of target will have the same number of # classes in all outputs oob_pred_shape = (n_samples, self.n_classes_[0], n_outputs) else: # for regression, n_classes_ does not exist and we create an empty # axis to be consistent with the classification case and make # the array operations compatible with the 2 settings oob_pred_shape = (n_samples, 1, n_outputs) oob_pred = np.zeros(shape=oob_pred_shape, dtype=np.float64) n_oob_pred = np.zeros((n_samples, n_outputs), dtype=np.int64) n_samples_bootstrap = _get_n_samples_bootstrap( n_samples, self.max_samples, ) for estimator in self.estimators_: unsampled_indices = _generate_unsampled_indices( estimator.random_state, n_samples, n_samples_bootstrap, ) y_pred = self._get_oob_predictions(estimator, X[unsampled_indices, :]) oob_pred[unsampled_indices, ...] += y_pred n_oob_pred[unsampled_indices, :] += 1 for k in range(n_outputs): if (n_oob_pred == 0).any(): warn( ( "Some inputs do not have OOB scores. This probably means " "too few trees were used to compute any reliable OOB " "estimates." ), UserWarning, ) n_oob_pred[n_oob_pred == 0] = 1 oob_pred[..., k] /= n_oob_pred[..., [k]] return oob_pred
Compute and set the OOB score. Parameters ---------- X : array-like of shape (n_samples, n_features) The data matrix. y : ndarray of shape (n_samples, n_outputs) The target matrix. Returns ------- oob_pred : ndarray of shape (n_samples, n_classes, n_outputs) or \ (n_samples, 1, n_outputs) The OOB predictions.
python
sklearn/ensemble/_forest.py
558
[ "self", "X", "y" ]
false
8
6
scikit-learn/scikit-learn
64,340
numpy
false
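This private helper backs the public `oob_score` option; a minimal sketch through documented scikit-learn APIs only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = RandomForestClassifier(n_estimators=100, oob_score=True,
                             bootstrap=True, random_state=0).fit(X, y)

print(clf.oob_score_)                    # accuracy on out-of-bag samples
print(clf.oob_decision_function_.shape)  # (500, 2): per-sample OOB probabilities
```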
_validate_usecols_arg
def _validate_usecols_arg(usecols): """ Validate the 'usecols' parameter. Checks whether or not the 'usecols' parameter contains all integers (column selection by index), strings (column by name) or is a callable. Raises a ValueError if that is not the case. Parameters ---------- usecols : list-like, callable, or None List of columns to use when parsing or a callable that can be used to filter a list of table columns. Returns ------- usecols_tuple : tuple A tuple of (verified_usecols, usecols_dtype). 'verified_usecols' is either a set if an array-like is passed in or 'usecols' if a callable or None is passed in. 'usecols_dtype` is the inferred dtype of 'usecols' if an array-like is passed in or None if a callable or None is passed in. """ msg = ( "'usecols' must either be list-like of all strings, all unicode, " "all integers or a callable." ) if usecols is not None: if callable(usecols): return usecols, None if not is_list_like(usecols): # see gh-20529 # # Ensure it is iterable container but not string. raise ValueError(msg) usecols_dtype = lib.infer_dtype(usecols, skipna=False) if usecols_dtype not in ("empty", "integer", "string"): raise ValueError(msg) usecols = set(usecols) return usecols, usecols_dtype return usecols, None
Validate the 'usecols' parameter. Checks whether or not the 'usecols' parameter contains all integers (column selection by index), strings (column by name) or is a callable. Raises a ValueError if that is not the case. Parameters ---------- usecols : list-like, callable, or None List of columns to use when parsing or a callable that can be used to filter a list of table columns. Returns ------- usecols_tuple : tuple A tuple of (verified_usecols, usecols_dtype). 'verified_usecols' is either a set if an array-like is passed in or 'usecols' if a callable or None is passed in. 'usecols_dtype` is the inferred dtype of 'usecols' if an array-like is passed in or None if a callable or None is passed in.
python
pandas/io/parsers/base_parser.py
921
[ "usecols" ]
false
5
6.08
pandas-dev/pandas
47,362
numpy
false
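The three accepted forms, exercised through the public `read_csv` interface (a small sketch; the data is illustrative).

```python
import io
import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6"

pd.read_csv(io.StringIO(data), usecols=["a", "c"])          # all strings
pd.read_csv(io.StringIO(data), usecols=[0, 2])              # all integers
pd.read_csv(io.StringIO(data), usecols=lambda c: c != "b")  # callable
# Mixing names and indices, e.g. usecols=["a", 2], raises ValueError.
```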
connecting
public void connecting(String id, long now, String host) { NodeConnectionState connectionState = nodeState.get(id); if (connectionState != null && connectionState.host().equals(host)) { connectionState.lastConnectAttemptMs = now; connectionState.state = ConnectionState.CONNECTING; // Move to next resolved address, or if addresses are exhausted, mark node to be re-resolved connectionState.moveToNextAddress(); connectingNodes.add(id); return; } else if (connectionState != null) { log.info("Hostname for node {} changed from {} to {}.", id, connectionState.host(), host); } // Create a new NodeConnectionState if nodeState does not already contain one // for the specified id or if the hostname associated with the node id changed. nodeState.put(id, new NodeConnectionState(ConnectionState.CONNECTING, now, reconnectBackoff.backoff(0), connectionSetupTimeout.backoff(0), host, hostResolver, log)); connectingNodes.add(id); }
Enter the connecting state for the given connection, moving to a new resolved address if necessary. @param id the id of the connection @param now the current time in ms @param host the host of the connection, to be resolved internally if needed
java
clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
147
[ "id", "now", "host" ]
void
true
4
7.04
apache/kafka
31,560
javadoc
false
make_checkerboard
def make_checkerboard( shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None, ): """Generate an array with block checkerboard structure for biclustering. Read more in the :ref:`User Guide <sample_generators>`. Parameters ---------- shape : tuple of shape (n_rows, n_cols) The shape of the result. n_clusters : int or array-like or shape (n_row_clusters, n_column_clusters) The number of row and column clusters. noise : float, default=0.0 The standard deviation of the gaussian noise. minval : float, default=10 Minimum value of a bicluster. maxval : float, default=100 Maximum value of a bicluster. shuffle : bool, default=True Shuffle the samples. random_state : int, RandomState instance or None, default=None Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- X : ndarray of shape `shape` The generated array. rows : ndarray of shape (n_clusters, X.shape[0]) The indicators for cluster membership of each row. cols : ndarray of shape (n_clusters, X.shape[1]) The indicators for cluster membership of each column. See Also -------- make_biclusters : Generate an array with constant block diagonal structure for biclustering. References ---------- .. [1] Kluger, Y., Basri, R., Chang, J. T., & Gerstein, M. (2003). Spectral biclustering of microarray data: coclustering genes and conditions. Genome research, 13(4), 703-716. Examples -------- >>> from sklearn.datasets import make_checkerboard >>> data, rows, columns = make_checkerboard(shape=(300, 300), n_clusters=10, ... random_state=42) >>> data.shape (300, 300) >>> rows.shape (100, 300) >>> columns.shape (100, 300) >>> print(rows[0][:5], columns[0][:5]) [False False False True False] [False False False False False] """ generator = check_random_state(random_state) if hasattr(n_clusters, "__len__"): n_row_clusters, n_col_clusters = n_clusters else: n_row_clusters = n_col_clusters = n_clusters # row and column clusters of approximately equal sizes n_rows, n_cols = shape row_sizes = generator.multinomial( n_rows, np.repeat(1.0 / n_row_clusters, n_row_clusters) ) col_sizes = generator.multinomial( n_cols, np.repeat(1.0 / n_col_clusters, n_col_clusters) ) row_labels = np.hstack( [np.repeat(val, rep) for val, rep in zip(range(n_row_clusters), row_sizes)] ) col_labels = np.hstack( [np.repeat(val, rep) for val, rep in zip(range(n_col_clusters), col_sizes)] ) result = np.zeros(shape, dtype=np.float64) for i in range(n_row_clusters): for j in range(n_col_clusters): selector = np.outer(row_labels == i, col_labels == j) result[selector] += generator.uniform(minval, maxval) if noise > 0: result += generator.normal(scale=noise, size=result.shape) if shuffle: result, row_idx, col_idx = _shuffle(result, random_state) row_labels = row_labels[row_idx] col_labels = col_labels[col_idx] rows = np.vstack( [ row_labels == label for label in range(n_row_clusters) for _ in range(n_col_clusters) ] ) cols = np.vstack( [ col_labels == label for _ in range(n_row_clusters) for label in range(n_col_clusters) ] ) return result, rows, cols
Generate an array with block checkerboard structure for biclustering. Read more in the :ref:`User Guide <sample_generators>`. Parameters ---------- shape : tuple of shape (n_rows, n_cols) The shape of the result. n_clusters : int or array-like or shape (n_row_clusters, n_column_clusters) The number of row and column clusters. noise : float, default=0.0 The standard deviation of the gaussian noise. minval : float, default=10 Minimum value of a bicluster. maxval : float, default=100 Maximum value of a bicluster. shuffle : bool, default=True Shuffle the samples. random_state : int, RandomState instance or None, default=None Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- X : ndarray of shape `shape` The generated array. rows : ndarray of shape (n_clusters, X.shape[0]) The indicators for cluster membership of each row. cols : ndarray of shape (n_clusters, X.shape[1]) The indicators for cluster membership of each column. See Also -------- make_biclusters : Generate an array with constant block diagonal structure for biclustering. References ---------- .. [1] Kluger, Y., Basri, R., Chang, J. T., & Gerstein, M. (2003). Spectral biclustering of microarray data: coclustering genes and conditions. Genome research, 13(4), 703-716. Examples -------- >>> from sklearn.datasets import make_checkerboard >>> data, rows, columns = make_checkerboard(shape=(300, 300), n_clusters=10, ... random_state=42) >>> data.shape (300, 300) >>> rows.shape (100, 300) >>> columns.shape (100, 300) >>> print(rows[0][:5], columns[0][:5]) [False False False True False] [False False False False False]
python
sklearn/datasets/_samples_generator.py
2,256
[ "shape", "n_clusters", "noise", "minval", "maxval", "shuffle", "random_state" ]
false
7
7.12
scikit-learn/scikit-learn
64,340
numpy
false
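A sketch pairing the generator with `SpectralBiclustering`; `shuffle=False` keeps the ground-truth indicators aligned, so on clean data the consensus score should be close to 1.

```python
from sklearn.cluster import SpectralBiclustering
from sklearn.datasets import make_checkerboard
from sklearn.metrics import consensus_score

data, rows, cols = make_checkerboard(shape=(300, 300), n_clusters=(4, 3),
                                     noise=10, shuffle=False, random_state=0)
model = SpectralBiclustering(n_clusters=(4, 3), random_state=0).fit(data)
print(consensus_score(model.biclusters_, (rows, cols)))  # near 1.0
```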
createImportCallExpressionUMD
function createImportCallExpressionUMD(arg: Expression, containsLexicalThis: boolean): Expression { // (function (factory) { // ... (regular UMD) // } // })(function (require, exports, useSyncRequire) { // "use strict"; // Object.defineProperty(exports, "__esModule", { value: true }); // var __syncRequire = typeof module === "object" && typeof module.exports === "object"; // var __resolved = new Promise(function (resolve) { resolve(); }); // ..... // __syncRequire // ? __resolved.then(function () { return require(x); }) /*CommonJs Require*/ // : new Promise(function (_a, _b) { require([x], _a, _b); }); /*Amd Require*/ // }); needUMDDynamicImportHelper = true; if (isSimpleCopiableExpression(arg)) { const argClone = isGeneratedIdentifier(arg) ? arg : isStringLiteral(arg) ? factory.createStringLiteralFromNode(arg) : setEmitFlags(setTextRange(factory.cloneNode(arg), arg), EmitFlags.NoComments); return factory.createConditionalExpression( /*condition*/ factory.createIdentifier("__syncRequire"), /*questionToken*/ undefined, /*whenTrue*/ createImportCallExpressionCommonJS(arg), /*colonToken*/ undefined, /*whenFalse*/ createImportCallExpressionAMD(argClone, containsLexicalThis), ); } else { const temp = factory.createTempVariable(hoistVariableDeclaration); return factory.createComma( factory.createAssignment(temp, arg), factory.createConditionalExpression( /*condition*/ factory.createIdentifier("__syncRequire"), /*questionToken*/ undefined, /*whenTrue*/ createImportCallExpressionCommonJS(temp, /*isInlineable*/ true), /*colonToken*/ undefined, /*whenFalse*/ createImportCallExpressionAMD(temp, containsLexicalThis), ), ); } }
Creates a dynamic {@code import()} call expression for UMD modules, dispatching at runtime between a CommonJS {@code require} and an AMD {@code require} based on {@code __syncRequire}. @param arg The module specifier expression. @param containsLexicalThis Whether the import call occurs in a context that captures lexical {@code this}.
typescript
src/compiler/transformers/module/module.ts
1,232
[ "arg", "containsLexicalThis" ]
true
5
6.72
microsoft/TypeScript
107,154
jsdoc
false
startAsync
@CanIgnoreReturnValue @Override public final Service startAsync() { if (monitor.enterIf(isStartable)) { try { snapshot = new StateSnapshot(STARTING); enqueueStartingEvent(); doStart(); } catch (Throwable startupFailure) { restoreInterruptIfIsInterruptedException(startupFailure); notifyFailed(startupFailure); } finally { monitor.leave(); dispatchListenerEvents(); } } else { throw new IllegalStateException("Service " + this + " has already been started"); } return this; }
If the service state is NEW, initiates service startup and returns immediately; any failure thrown by {@link #doStart} is reported via {@link #notifyFailed}. @return this @throws IllegalStateException if the service has already been started
java
android/guava/src/com/google/common/util/concurrent/AbstractService.java
243
[]
Service
true
3
6.88
google/guava
51,352
javadoc
false
errorForResponse
public abstract Errors errorForResponse(R response);
Returns the error for the response. @param response The heartbeat response @return The error {@link Errors}
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractHeartbeatRequestManager.java
507
[ "response" ]
Errors
true
1
6.8
apache/kafka
31,560
javadoc
false
get_dag_by_file_location
def get_dag_by_file_location(dag_id: str): """Return DAG of a given dag_id by looking up file location.""" # TODO: AIP-66 - investigate more, can we use serdag? from airflow.dag_processing.dagbag import DagBag from airflow.models import DagModel # Benefit is that logging from other dags in dagbag will not appear dag_model = DagModel.get_current(dag_id) if dag_model is None: raise AirflowException( f"Dag {dag_id!r} could not be found; either it does not exist or it failed to parse." ) # This method is called only when we explicitly do not have a bundle name dagbag = DagBag(dag_folder=dag_model.fileloc) return dagbag.dags[dag_id]
Return DAG of a given dag_id by looking up file location.
python
airflow-core/src/airflow/utils/cli.py
230
[ "dag_id" ]
true
2
6
apache/airflow
43,597
unknown
false
streamPropertySources
private static Stream<PropertySource<?>> streamPropertySources(PropertySources sources) { return sources.stream() .flatMap(ConfigurationPropertySources::flatten) .filter(ConfigurationPropertySources::isIncluded); }
Returns a flattened {@link Stream} of the given Spring {@link PropertySources}, expanding any nested property sources and filtering out those that should not be included. @param sources the Spring property sources to adapt @return a flattened, filtered stream of property sources
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertySources.java
160
[ "sources" ]
true
1
6.24
spring-projects/spring-boot
79,428
javadoc
false
logNonMatchingType
private void logNonMatchingType(C callback, ClassCastException ex) { if (this.logger.isDebugEnabled()) { Class<?> expectedType = ResolvableType.forClass(this.callbackType).resolveGeneric(); String expectedTypeName = (expectedType != null) ? ClassUtils.getShortName(expectedType) + " type" : "type"; String message = "Non-matching " + expectedTypeName + " for callback " + ClassUtils.getShortName(this.callbackType) + ": " + callback; this.logger.debug(message, ex); } }
Logs a debug message when a callback instance does not match the expected callback type, including the {@link ClassCastException} that was raised during invocation. @param callback the non-matching callback @param ex the class cast exception
java
core/spring-boot/src/main/java/org/springframework/boot/util/LambdaSafe.java
217
[ "callback", "ex" ]
void
true
3
8.24
spring-projects/spring-boot
79,428
javadoc
false
commit_sha
def commit_sha(): """Returns commit SHA of current repo. Cached for various usages.""" command_result = run_command(["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=False) if command_result.stdout: return command_result.stdout.strip() return "COMMIT_SHA_NOT_FOUND"
Returns commit SHA of current repo. Cached for various usages.
python
dev/breeze/src/airflow_breeze/utils/run_utils.py
413
[]
false
2
6.08
apache/airflow
43,597
unknown
false
close
@Override public void close() { if (closed) { assert false : "ExponentialHistogramMerger closed multiple times"; } else { closed = true; if (result != null) { result.close(); result = null; } if (buffer != null) { buffer.close(); buffer = null; } circuitBreaker.adjustBreaker(-BASE_SIZE); } }
Closes this merger, releasing the result and buffer histograms and returning the base memory reservation to the circuit breaker. Closing more than once trips an assertion.
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogramMerger.java
110
[]
void
true
4
6.56
elastic/elasticsearch
75,680
javadoc
false
asToken
function asToken<TKind extends SyntaxKind>(value: TKind | Token<TKind>): Token<TKind> { return typeof value === "number" ? createToken(value) : value; }
Returns the given value as a {@link Token}, creating a new token when a raw {@link SyntaxKind} is supplied. @param value The SyntaxKind or an existing Token.
typescript
src/compiler/factory/nodeFactory.ts
7,157
[ "value" ]
true
2
6.64
microsoft/TypeScript
107,154
jsdoc
false
redirect
def redirect( location: str, code: int = 302, Response: type[BaseResponse] | None = None ) -> BaseResponse: """Create a redirect response object. If :data:`~flask.current_app` is available, it will use its :meth:`~flask.Flask.redirect` method, otherwise it will use :func:`werkzeug.utils.redirect`. :param location: The URL to redirect to. :param code: The status code for the redirect. :param Response: The response class to use. Not used when ``current_app`` is active, which uses ``app.response_class``. .. versionadded:: 2.2 Calls ``current_app.redirect`` if available instead of always using Werkzeug's default ``redirect``. """ if (ctx := _cv_app.get(None)) is not None: return ctx.app.redirect(location, code=code) return _wz_redirect(location, code=code, Response=Response)
Create a redirect response object. If :data:`~flask.current_app` is available, it will use its :meth:`~flask.Flask.redirect` method, otherwise it will use :func:`werkzeug.utils.redirect`. :param location: The URL to redirect to. :param code: The status code for the redirect. :param Response: The response class to use. Not used when ``current_app`` is active, which uses ``app.response_class``. .. versionadded:: 2.2 Calls ``current_app.redirect`` if available instead of always using Werkzeug's default ``redirect``.
python
src/flask/helpers.py
241
[ "location", "code", "Response" ]
BaseResponse
true
2
6.4
pallets/flask
70,946
sphinx
false
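Typical usage inside a view, per the documented Flask API:

```python
from flask import Flask, redirect, url_for

app = Flask(__name__)

@app.route("/")
def index():
    # Inside a request context, current_app is active, so app.redirect() is used.
    return redirect(url_for("login"))

@app.route("/login")
def login():
    return "login page"
```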
toLong
public Long toLong() { return Long.valueOf(longValue()); }
Gets this mutable as an instance of Long. @return a Long instance containing the value from this mutable, never null.
java
src/main/java/org/apache/commons/lang3/mutable/MutableLong.java
363
[]
Long
true
1
6.96
apache/commons-lang
2,896
javadoc
false
instantiate
@Override public Object instantiate(RootBeanDefinition bd, @Nullable String beanName, BeanFactory owner) { // Don't override the class with CGLIB if no overrides. if (!bd.hasMethodOverrides()) { Constructor<?> constructorToUse; synchronized (bd.constructorArgumentLock) { constructorToUse = (Constructor<?>) bd.resolvedConstructorOrFactoryMethod; if (constructorToUse == null) { Class<?> clazz = bd.getBeanClass(); if (clazz.isInterface()) { throw new BeanInstantiationException(clazz, "Specified class is an interface"); } try { constructorToUse = clazz.getDeclaredConstructor(); bd.resolvedConstructorOrFactoryMethod = constructorToUse; } catch (Throwable ex) { throw new BeanInstantiationException(clazz, "No default constructor found", ex); } } } return BeanUtils.instantiateClass(constructorToUse); } else { // Must generate CGLIB subclass. return instantiateWithMethodInjection(bd, beanName, owner); } }
Instantiates the bean using its default constructor, resolving and caching the constructor on the bean definition if necessary; falls back to a CGLIB subclass via method injection when the bean definition declares method overrides. @param bd the bean definition @param beanName the name of the bean, if given @param owner the owning BeanFactory @return a new bean instance @throws BeanInstantiationException if the bean class is an interface or has no default constructor
java
spring-beans/src/main/java/org/springframework/beans/factory/support/SimpleInstantiationStrategy.java
85
[ "bd", "beanName", "owner" ]
Object
true
5
7.76
spring-projects/spring-framework
59,386
javadoc
false
_are_inputs_layout_compatible
def _are_inputs_layout_compatible(self, layouts: list[Layout]) -> bool: """ Evaluates whether input layouts are compatible for General Matrix Multiply (GEMM). This function checks compatibility of A, B, and possibly C operand layouts for a General Matrix Multiply (GEMM) operation, expressed as 'alpha * matmul(A, B) + beta * C'. It verifies requirements such as matching data types, minimum rank, and suitability for broadcasting, as defined by PyTorch operations like `torch.matmul`, `torch.aten.mm`, `addmm`, `bmm`, `baddbmm`, etc. Args: layouts (List[Layout]): List containing 2 or 3 Layout objects representing the input matrices A, B, and possibly C. Returns: bool: True if layouts are GEMM compatible, otherwise False. """ assert 2 <= len(layouts) <= 5 # Check if A and B are compatible A_layout, B_layout = layouts[:2] if len(A_layout.size) < 1: return False if len(B_layout.size) < 1: return False A_size = list(V.graph.sizevars.size_hints(A_layout.size)) B_size = list(V.graph.sizevars.size_hints(B_layout.size)) if len(A_size) < 2: A_size.insert(0, 1) if len(B_size) < 2: B_size.insert(1, 1) # Are batch dims broadcastable? while len(A_size) < len(B_size): A_size.insert(0, 1) while len(B_size) < len(A_size): B_size.insert(0, 1) K = max(A_size[-1], B_size[-2]) M = A_size[-2] N = B_size[-1] if K != A_size[-1] and A_size[-1] != 1: return False if K != B_size[-2] and B_size[-1] != 1: return False # check batch dim broadcastable for i in range(len(A_size) - 2): if A_size[i] != B_size[i] and A_size[i] != 1 and B_size[i] != 1: return False if len(layouts) == 3: C_layout = layouts[2] C_size = [V.graph.sizevars.size_hint(i) for i in C_layout.size] while len(C_size) < len(A_size): C_size.insert(0, 1) # check batch dims for i in range(len(A_size) - 2): bd = max(A_size[i], B_size[i]) if bd != C_size[i] and C_size[i] != 1: return False if len(C_size) > len(A_size): # This may happen if the last elements of C are contiguous and # their multiplied size equals the last dim size of B if M != C_size[len(A_size) - 2] and C_size[len(A_size) - 2] != 1: return False remaining_size = 1 for i in range(len(A_size) - 1, len(C_size)): remaining_size *= C_size[i] if N != remaining_size and remaining_size != 1: return False return True assert len(C_size) == len(A_size) if M != C_size[-2] and C_size[-2] != 1: return False if N != C_size[-1] and C_size[-1] != 1: return False return True
Evaluates whether input layouts are compatible for General Matrix Multiply (GEMM). This function checks compatibility of A, B, and possibly C operand layouts for a General Matrix Multiply (GEMM) operation, expressed as 'alpha * matmul(A, B) + beta * C'. It verifies requirements such as matching data types, minimum rank, and suitability for broadcasting, as defined by PyTorch operations like `torch.matmul`, `torch.aten.mm`, `addmm`, `bmm`, `baddbmm`, etc. Args: layouts (List[Layout]): List containing 2 or 3 Layout objects representing the input matrices A, B, and possibly C. Returns: bool: True if layouts are GEMM compatible, otherwise False.
python
torch/_inductor/codegen/cuda/gemm_template.py
1,390
[ "self", "layouts" ]
bool
true
30
6.64
pytorch/pytorch
96,034
google
false
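The compatibility rules this check mirrors are `torch.matmul`'s documented shape semantics; a quick sketch of the batch-dim broadcasting and 1-D promotion it is testing for.

```python
import torch

A = torch.randn(2, 1, 3, 4)  # batch dims (2, 1), matrix (3, 4)
B = torch.randn(5, 4, 6)     # batch dim (5,), matrix (4, 6)

# Batch dims broadcast (2, 1) x (5,) -> (2, 5); inner dim K = 4 must match.
print(torch.matmul(A, B).shape)  # torch.Size([2, 5, 3, 6])

v = torch.randn(4)               # 1-D B is promoted to (4, 1), then squeezed
print(torch.matmul(A, v).shape)  # torch.Size([2, 1, 3])
```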
escapeUnsafe
protected abstract char @Nullable [] escapeUnsafe(int cp);
Escapes a code point that has no direct explicit value in the replacement array and lies outside the stated safe range. Subclasses should override this method to provide generalized escaping for code points if required. <p>Note that arrays returned by this method must not be modified once they have been returned. However it is acceptable to return the same array multiple times (even for different input characters). @param cp the Unicode code point to escape @return the replacement characters, or {@code null} if no escaping was required
java
android/guava/src/com/google/common/escape/ArrayBasedUnicodeEscaper.java
204
[ "cp" ]
true
1
6.8
google/guava
51,352
javadoc
false
ultimateTargetClass
public static Class<?> ultimateTargetClass(Object candidate) { Assert.notNull(candidate, "Candidate object must not be null"); Object current = candidate; Class<?> result = null; while (current instanceof TargetClassAware targetClassAware) { result = targetClassAware.getTargetClass(); current = getSingletonTarget(current); } if (result == null) { result = (AopUtils.isCglibProxy(candidate) ? candidate.getClass().getSuperclass() : candidate.getClass()); } return result; }
Determine the ultimate target class of the given bean instance, traversing not only a top-level proxy but any number of nested proxies as well &mdash; as long as possible without side effects, that is, just for singleton targets. @param candidate the instance to check (might be an AOP proxy) @return the ultimate target class (or the plain class of the given object as fallback; never {@code null}) @see org.springframework.aop.TargetClassAware#getTargetClass() @see Advised#getTargetSource()
java
spring-aop/src/main/java/org/springframework/aop/framework/AopProxyUtils.java
82
[ "candidate" ]
true
4
7.92
spring-projects/spring-framework
59,386
javadoc
false
empty
def empty(shape, dtype=None, order='C'): """Return a new matrix of given shape and type, without initializing entries. Parameters ---------- shape : int or tuple of int Shape of the empty matrix. dtype : data-type, optional Desired output data-type. order : {'C', 'F'}, optional Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. See Also -------- numpy.empty : Equivalent array function. matlib.zeros : Return a matrix of zeros. matlib.ones : Return a matrix of ones. Notes ----- Unlike other matrix creation functions (e.g. `matlib.zeros`, `matlib.ones`), `matlib.empty` does not initialize the values of the matrix, and may therefore be marginally faster. However, the values stored in the newly allocated matrix are arbitrary. For reproducible behavior, be sure to set each element of the matrix before reading. Examples -------- >>> import numpy.matlib >>> np.matlib.empty((2, 2)) # filled with random data matrix([[ 6.76425276e-320, 9.79033856e-307], # random [ 7.39337286e-309, 3.22135945e-309]]) >>> np.matlib.empty((2, 2), dtype=np.int_) matrix([[ 6600475, 0], # random [ 6586976, 22740995]]) """ return ndarray.__new__(matrix, shape, dtype, order=order)
Return a new matrix of given shape and type, without initializing entries. Parameters ---------- shape : int or tuple of int Shape of the empty matrix. dtype : data-type, optional Desired output data-type. order : {'C', 'F'}, optional Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. See Also -------- numpy.empty : Equivalent array function. matlib.zeros : Return a matrix of zeros. matlib.ones : Return a matrix of ones. Notes ----- Unlike other matrix creation functions (e.g. `matlib.zeros`, `matlib.ones`), `matlib.empty` does not initialize the values of the matrix, and may therefore be marginally faster. However, the values stored in the newly allocated matrix are arbitrary. For reproducible behavior, be sure to set each element of the matrix before reading. Examples -------- >>> import numpy.matlib >>> np.matlib.empty((2, 2)) # filled with random data matrix([[ 6.76425276e-320, 9.79033856e-307], # random [ 7.39337286e-309, 3.22135945e-309]]) >>> np.matlib.empty((2, 2), dtype=np.int_) matrix([[ 6600475, 0], # random [ 6586976, 22740995]])
python
numpy/matlib.py
25
[ "shape", "dtype", "order" ]
false
1
6
numpy/numpy
31,054
numpy
false
combine
@CanIgnoreReturnValue Builder<E> combine(Builder<E> other) { requireNonNull(impl); requireNonNull(other.impl); /* * For discussion of requireNonNull, see the comment on the field. * * (And I don't believe there's any situation in which we call x.combine(y) when x is a plain * ImmutableSet.Builder but y is an ImmutableSortedSet.Builder (or vice versa). Certainly * ImmutableSortedSet.Builder.combine() is written as if its argument will never be a plain * ImmutableSet.Builder: It casts immediately to ImmutableSortedSet.Builder.) */ copyIfNecessary(); this.impl = this.impl.combine(other.impl); return this; }
Adds the contents of {@code other} to this {@code Builder}, combining the two underlying implementations. @param other the builder whose elements to add @return this {@code Builder} object
java
guava/src/com/google/common/collect/ImmutableSet.java
556
[ "other" ]
true
1
6.56
google/guava
51,352
javadoc
false
indexer_at_time
def indexer_at_time(self, time, asof: bool = False) -> npt.NDArray[np.intp]: """ Return index locations of values at particular time of day. Parameters ---------- time : datetime.time or str Time passed in either as object (datetime.time) or as string in appropriate format ("%H:%M", "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"). asof : bool, default False This parameter is currently not supported. Returns ------- np.ndarray[np.intp] Index locations of values at given `time` of day. See Also -------- indexer_between_time : Get index locations of values between particular times of day. DataFrame.at_time : Select values at particular time of day. Examples -------- >>> idx = pd.DatetimeIndex( ... ["1/1/2020 10:00", "2/1/2020 11:00", "3/1/2020 10:00"] ... ) >>> idx.indexer_at_time("10:00") array([0, 2]) """ if asof: raise NotImplementedError("'asof' argument is not supported") if isinstance(time, str): from dateutil.parser import parse orig = time try: alt = to_time(time) except ValueError: warnings.warn( # GH#50839 f"The string '{orig}' cannot be parsed using pd.core.tools.to_time " f"and in a future version will raise. " "Use an unambiguous time string format or explicitly cast to " "`datetime.time` before calling.", Pandas4Warning, stacklevel=find_stack_level(), ) time = parse(time).time() else: try: time = parse(time).time() except ValueError: # e.g. '23550' raises dateutil.parser._parser.ParserError time = alt if alt != time: warnings.warn( # GH#50839 f"The string '{orig}' is currently parsed as {time} " f"but in a future version will be parsed as {alt}, consistent" "with `between_time` behavior. To avoid this warning, " "use an unambiguous string format or explicitly cast to " "`datetime.time` before calling.", Pandas4Warning, stacklevel=find_stack_level(), ) if time.tzinfo: if self.tz is None: raise ValueError("Index must be timezone aware.") time_micros = self.tz_convert(time.tzinfo)._get_time_micros() else: time_micros = self._get_time_micros() micros = _time_to_micros(time) return (time_micros == micros).nonzero()[0]
Return index locations of values at particular time of day. Parameters ---------- time : datetime.time or str Time passed in either as object (datetime.time) or as string in appropriate format ("%H:%M", "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"). asof : bool, default False This parameter is currently not supported. Returns ------- np.ndarray[np.intp] Index locations of values at given `time` of day. See Also -------- indexer_between_time : Get index locations of values between particular times of day. DataFrame.at_time : Select values at particular time of day. Examples -------- >>> idx = pd.DatetimeIndex( ... ["1/1/2020 10:00", "2/1/2020 11:00", "3/1/2020 10:00"] ... ) >>> idx.indexer_at_time("10:00") array([0, 2])
python
pandas/core/indexes/datetimes.py
1,102
[ "self", "time", "asof" ]
npt.NDArray[np.intp]
true
8
8.08
pandas-dev/pandas
47,362
numpy
false
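`Series.at_time` and `DataFrame.at_time` are the user-facing counterparts of this indexer; a short sketch with the public API.

```python
import pandas as pd

idx = pd.date_range("2020-01-01", periods=4, freq="12h")
ts = pd.Series(range(4), index=idx)

print(idx.indexer_at_time("12:00"))  # [1 3]
print(ts.at_time("12:00"))
# 2020-01-01 12:00:00    1
# 2020-01-02 12:00:00    3
```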
period_array
def period_array( data: Sequence[Period | str | None] | AnyArrayLike, freq: str | Tick | BaseOffset | None = None, copy: bool = False, ) -> PeriodArray: """ Construct a new PeriodArray from a sequence of Period scalars. Parameters ---------- data : Sequence of Period objects A sequence of Period objects. These are required to all have the same ``freq.`` Missing values can be indicated by ``None`` or ``pandas.NaT``. freq : str, Tick, or Offset The frequency of every element of the array. This can be specified to avoid inferring the `freq` from `data`. copy : bool, default False Whether to ensure a copy of the data is made. Returns ------- PeriodArray See Also -------- PeriodArray pandas.PeriodIndex Examples -------- >>> period_array([pd.Period("2017", freq="Y"), pd.Period("2018", freq="Y")]) <PeriodArray> ['2017', '2018'] Length: 2, dtype: period[Y-DEC] >>> period_array([pd.Period("2017", freq="Y"), pd.Period("2018", freq="Y"), pd.NaT]) <PeriodArray> ['2017', '2018', 'NaT'] Length: 3, dtype: period[Y-DEC] Integers that look like years are handled >>> period_array([2000, 2001, 2002], freq="D") <PeriodArray> ['2000-01-01', '2001-01-01', '2002-01-01'] Length: 3, dtype: period[D] Datetime-like strings may also be passed >>> period_array(["2000-Q1", "2000-Q2", "2000-Q3", "2000-Q4"], freq="Q") <PeriodArray> ['2000Q1', '2000Q2', '2000Q3', '2000Q4'] Length: 4, dtype: period[Q-DEC] """ data_dtype = getattr(data, "dtype", None) if lib.is_np_dtype(data_dtype, "M"): return PeriodArray._from_datetime64(data, freq) if isinstance(data_dtype, PeriodDtype): out = PeriodArray(data) if freq is not None: if freq == data_dtype.freq: return out return out.asfreq(freq) return out # other iterable of some kind if not isinstance(data, (np.ndarray, list, tuple, ABCSeries)): data = list(data) arrdata = np.asarray(data) dtype: PeriodDtype | None if freq: dtype = PeriodDtype(freq) else: dtype = None if arrdata.dtype.kind == "f" and len(arrdata) > 0: raise TypeError("PeriodIndex does not allow floating point in construction") if arrdata.dtype.kind in "iu": arr = arrdata.astype(np.int64, copy=False) # error: Argument 2 to "from_ordinals" has incompatible type "Union[str, # Tick, None]"; expected "Union[timedelta, BaseOffset, str]" ordinals = libperiod.from_ordinals(arr, freq) # type: ignore[arg-type] return PeriodArray(ordinals, dtype=dtype) data = ensure_object(arrdata) if freq is None: freq = libperiod.extract_freq(data) dtype = PeriodDtype(freq) return PeriodArray._from_sequence(data, dtype=dtype)
Construct a new PeriodArray from a sequence of Period scalars. Parameters ---------- data : Sequence of Period objects A sequence of Period objects. These are required to all have the same ``freq.`` Missing values can be indicated by ``None`` or ``pandas.NaT``. freq : str, Tick, or Offset The frequency of every element of the array. This can be specified to avoid inferring the `freq` from `data`. copy : bool, default False Whether to ensure a copy of the data is made. Returns ------- PeriodArray See Also -------- PeriodArray pandas.PeriodIndex Examples -------- >>> period_array([pd.Period("2017", freq="Y"), pd.Period("2018", freq="Y")]) <PeriodArray> ['2017', '2018'] Length: 2, dtype: period[Y-DEC] >>> period_array([pd.Period("2017", freq="Y"), pd.Period("2018", freq="Y"), pd.NaT]) <PeriodArray> ['2017', '2018', 'NaT'] Length: 3, dtype: period[Y-DEC] Integers that look like years are handled >>> period_array([2000, 2001, 2002], freq="D") <PeriodArray> ['2000-01-01', '2001-01-01', '2002-01-01'] Length: 3, dtype: period[D] Datetime-like strings may also be passed >>> period_array(["2000-Q1", "2000-Q2", "2000-Q3", "2000-Q4"], freq="Q") <PeriodArray> ['2000Q1', '2000Q2', '2000Q3', '2000Q4'] Length: 4, dtype: period[Q-DEC]
python
pandas/core/arrays/period.py
1,193
[ "data", "freq", "copy" ]
PeriodArray
true
12
8.08
pandas-dev/pandas
47,362
numpy
false
serialize
@SuppressWarnings("resource") // outputStream is managed by the caller public static void serialize(final Serializable obj, final OutputStream outputStream) { Objects.requireNonNull(outputStream, "outputStream"); try (ObjectOutputStream out = new ObjectOutputStream(outputStream)) { out.writeObject(obj); } catch (final IOException ex) { throw new SerializationException(ex); } }
Serializes an {@link Object} to the specified stream. <p>The stream will be closed once the object is written. This avoids the need for a finally clause, and maybe also exception handling, in the application code.</p> <p>The stream passed in is not buffered internally within this method. This is the responsibility of your application if desired.</p> @param obj the object to serialize to bytes, may be null. @param outputStream the stream to write to, must not be null. @throws NullPointerException if {@code outputStream} is {@code null}. @throws SerializationException (runtime) if the serialization fails.
java
src/main/java/org/apache/commons/lang3/SerializationUtils.java
256
[ "obj", "outputStream" ]
void
true
2
6.72
apache/commons-lang
2,896
javadoc
false
appendWithSeparators
public StrBuilder appendWithSeparators(final Object[] array, final String separator) { if (array != null && array.length > 0) { final String sep = Objects.toString(separator, ""); append(array[0]); for (int i = 1; i < array.length; i++) { append(sep); append(array[i]); } } return this; }
Appends an array placing separators between each value, but not before the first or after the last. Appending a null array will have no effect. Each object is appended using {@link #append(Object)}. @param array the array to append @param separator the separator to use, null means no separator @return {@code this} instance.
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
1,450
[ "array", "separator" ]
StrBuilder
true
4
8.24
apache/commons-lang
2,896
javadoc
false
nullToEmpty
public static Long[] nullToEmpty(final Long[] array) { return nullTo(array, EMPTY_LONG_OBJECT_ARRAY); }
Defensive programming technique to change a {@code null} reference to an empty one. <p> This method returns an empty array for a {@code null} input array. </p> <p> As a memory optimizing technique an empty array passed in will be overridden with the empty {@code public static} references in this class. </p> @param array the array to check for {@code null} or empty. @return the same array, {@code public static} empty array if {@code null} or empty input. @since 2.5
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
4,546
[ "array" ]
true
1
6.96
apache/commons-lang
2,896
javadoc
false
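A short sketch of the null-safe pattern this method enables; the variable name is illustrative.

import org.apache.commons.lang3.ArrayUtils;

class NullToEmptyDemo {
    public static void main(String[] args) {
        Long[] values = ArrayUtils.nullToEmpty((Long[]) null);
        System.out.println(values.length); // 0, so callers can iterate without a null check
    }
}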
ensureOpenForRecordAppend
private void ensureOpenForRecordAppend() { if (appendStream == CLOSED_STREAM) throw new IllegalStateException("Tried to append a record, but MemoryRecordsBuilder is closed for record appends"); }
Ensure that this builder is still open for record appends. @throws IllegalStateException if the builder has been closed and no further records can be appended
java
clients/src/main/java/org/apache/kafka/common/record/MemoryRecordsBuilder.java
804
[]
void
true
2
6.96
apache/kafka
31,560
javadoc
false
format
@Override public StringBuffer format(final Date date, final StringBuffer buf) { final Calendar c = newCalendar(); c.setTime(date); return (StringBuffer) applyRules(c, (Appendable) buf); }
Formats a {@link Date} object into the supplied {@link StringBuffer} using a freshly created {@link Calendar} and this printer's rules. @param date the date to format. @param buf the buffer to format into. @return the supplied string buffer.
java
src/main/java/org/apache/commons/lang3/time/FastDatePrinter.java
1,172
[ "date", "buf" ]
StringBuffer
true
1
7.04
apache/commons-lang
2,896
javadoc
false
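FastDatePrinter is the engine behind FastDateFormat, which is the usual public entry point; a minimal sketch (the pattern string is illustrative):

import java.util.Date;
import org.apache.commons.lang3.time.FastDateFormat;

class FormatDemo {
    public static void main(String[] args) {
        FastDateFormat fmt = FastDateFormat.getInstance("yyyy-MM-dd HH:mm");
        System.out.println(fmt.format(new Date())); // formats via the underlying printer rules
    }
}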
indexOf
public int indexOf(final StrMatcher matcher, int startIndex) { startIndex = Math.max(startIndex, 0); if (matcher == null || startIndex >= size) { return -1; } final int len = size; final char[] buf = buffer; for (int i = startIndex; i < len; i++) { if (matcher.isMatch(buf, i, startIndex, len) > 0) { return i; } } return -1; }
Searches the string builder using the matcher to find the first match searching from the given index. <p> Matchers can be used to perform advanced searching behavior. For example you could write a matcher to find the character 'a' followed by a number. </p> @param matcher the matcher to use, null returns -1 @param startIndex the index to start at, invalid index rounded to edge @return the first index matched, or -1 if not found
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
2,070
[ "matcher", "startIndex" ]
true
5
8.08
apache/commons-lang
2,896
javadoc
false
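A usage sketch for matcher-based searching; the input string and character set are illustrative.

import org.apache.commons.lang3.text.StrBuilder;
import org.apache.commons.lang3.text.StrMatcher;

class IndexOfDemo {
    public static void main(String[] args) {
        StrBuilder sb = new StrBuilder("a,b;c");
        int i = sb.indexOf(StrMatcher.charSetMatcher(",;"), 2);
        System.out.println(i); // 3, the first ',' or ';' at or after index 2
    }
}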
is_scipy_sparse
def is_scipy_sparse(arr) -> bool: """ Check whether an array-like is a scipy.sparse.spmatrix instance. Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a scipy.sparse.spmatrix instance. Notes ----- If scipy is not installed, this function will always return False. Examples -------- >>> from scipy.sparse import bsr_matrix >>> is_scipy_sparse(bsr_matrix([1, 2, 3])) True >>> is_scipy_sparse(pd.arrays.SparseArray([1, 2, 3])) False """ global _is_scipy_sparse if _is_scipy_sparse is None: try: from scipy.sparse import issparse as _is_scipy_sparse except ImportError: _is_scipy_sparse = lambda _: False assert _is_scipy_sparse is not None return _is_scipy_sparse(arr)
Check whether an array-like is a scipy.sparse.spmatrix instance. Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a scipy.sparse.spmatrix instance. Notes ----- If scipy is not installed, this function will always return False. Examples -------- >>> from scipy.sparse import bsr_matrix >>> is_scipy_sparse(bsr_matrix([1, 2, 3])) True >>> is_scipy_sparse(pd.arrays.SparseArray([1, 2, 3])) False
python
pandas/core/dtypes/common.py
250
[ "arr" ]
bool
true
2
7.84
pandas-dev/pandas
47,362
numpy
false
nextFloat
@Deprecated public static float nextFloat() { return secure().randomFloat(); }
Generates a random float between 0 (inclusive) and Float.MAX_VALUE (exclusive). @return the random float. @see #nextFloat(float, float) @since 3.5 @deprecated Use {@link #secure()}, {@link #secureStrong()}, or {@link #insecure()}.
java
src/main/java/org/apache/commons/lang3/RandomUtils.java
166
[]
true
1
6.32
apache/commons-lang
2,896
javadoc
false
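A sketch of the replacement API the deprecation note points to, assuming a Commons Lang version that already ships RandomUtils.secure() (3.16+):

import org.apache.commons.lang3.RandomUtils;

class RandomFloatDemo {
    public static void main(String[] args) {
        float f = RandomUtils.secure().randomFloat(); // in [0, Float.MAX_VALUE)
        System.out.println(f);
    }
}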
startsWithArgumentClassName
private boolean startsWithArgumentClassName(String message) { Predicate<@Nullable Object> startsWith = (argument) -> startsWithArgumentClassName(message, argument); return startsWith.test(this.argument) || additionalArgumentsStartsWith(startsWith); }
Determine whether the given message starts with the class name of the callback argument or of any of the additional arguments. @param message the exception message to check @return {@code true} if the message starts with one of the argument class names
java
core/spring-boot/src/main/java/org/springframework/boot/util/LambdaSafe.java
177
[ "message" ]
true
2
8.16
spring-projects/spring-boot
79,428
javadoc
false
getCacheStats
public CacheStats getCacheStats() { Cache.Stats stats = cache.stats(); return new CacheStats( cache.count(), stats.getHits(), stats.getMisses(), stats.getEvictions(), TimeValue.nsecToMSec(hitsTimeInNanos.sum()), TimeValue.nsecToMSec(missesTimeInNanos.sum()) ); }
Returns stats about this cache as of this moment. There is no guarantee that the counts reconcile (for example hits + misses = count) because no locking is performed when requesting these stats. @return Current stats about this cache
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpCache.java
122
[]
CacheStats
true
1
7.04
elastic/elasticsearch
75,680
javadoc
false
_read
def _read( obj: FilePath | BaseBuffer, encoding: str | None, storage_options: StorageOptions | None, ) -> str | bytes: """ Try to read from a url, file or string. Parameters ---------- obj : str, unicode, path object, or file-like object Returns ------- raw_text : str """ try: with get_handle( obj, "r", encoding=encoding, storage_options=storage_options ) as handles: return handles.handle.read() except OSError as err: if not is_url(obj): raise FileNotFoundError( f"[Errno {errno.ENOENT}] {os.strerror(errno.ENOENT)}: {obj}" ) from err raise
Try to read from a url, file or string. Parameters ---------- obj : str, unicode, path object, or file-like object Returns ------- raw_text : str
python
pandas/io/html.py
117
[ "obj", "encoding", "storage_options" ]
str | bytes
true
2
7.04
pandas-dev/pandas
47,362
numpy
false
get
@ParametricNullness public static <T extends @Nullable Object> T get(Iterable<T> iterable, int position) { checkNotNull(iterable); return (iterable instanceof List) ? ((List<T>) iterable).get(position) : Iterators.get(iterable.iterator(), position); }
Returns the element at the specified position in an iterable. <p><b>{@code Stream} equivalent:</b> {@code stream.skip(position).findFirst().get()} (throws {@code NoSuchElementException} if out of bounds) @param position position of the element to return @return the element at the specified position in {@code iterable} @throws IndexOutOfBoundsException if {@code position} is negative or greater than or equal to the size of {@code iterable}
java
android/guava/src/com/google/common/collect/Iterables.java
776
[ "iterable", "position" ]
T
true
2
7.44
google/guava
51,352
javadoc
false
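A minimal usage sketch, assuming Guava on the classpath; the list contents are illustrative.

import com.google.common.collect.Iterables;
import java.util.Arrays;
import java.util.List;

class GetDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("ann", "bob", "cho");
        System.out.println(Iterables.get(names, 1)); // bob
        // Iterables.get(names, 3) would throw IndexOutOfBoundsException
    }
}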
deleteWhitespace
public static String deleteWhitespace(final String str) { if (isEmpty(str)) { return str; } final int sz = str.length(); final char[] chs = new char[sz]; int count = 0; for (int i = 0; i < sz; i++) { if (!Character.isWhitespace(str.charAt(i))) { chs[count++] = str.charAt(i); } } if (count == sz) { return str; } if (count == 0) { return EMPTY; } return new String(chs, 0, count); }
Deletes all whitespaces from a String as defined by {@link Character#isWhitespace(char)}. <pre> StringUtils.deleteWhitespace(null) = null StringUtils.deleteWhitespace("") = "" StringUtils.deleteWhitespace("abc") = "abc" StringUtils.deleteWhitespace(" ab c ") = "abc" </pre> @param str the String to delete whitespace from, may be null. @return the String without whitespaces, {@code null} if null String input.
java
src/main/java/org/apache/commons/lang3/StringUtils.java
1,617
[ "str" ]
String
true
6
7.76
apache/commons-lang
2,896
javadoc
false
getBestScoringError
function getBestScoringError(errors: NonUnionError[]) { return maxWithComparator(errors, (errorA, errorB) => { const aPathLength = getCombinedPathLength(errorA) const bPathLength = getCombinedPathLength(errorB) if (aPathLength !== bPathLength) { return aPathLength - bPathLength } return getErrorTypeScore(errorA) - getErrorTypeScore(errorB) }) }
Function that attempts to pick the best error from the list by ranking them. In most cases the highest-ranking error is the one with the longest combined "selectionPath" + "argumentPath". The justification is that a type that made it deeper into the validation tree before failing is probably closer to the one the user actually intended. However, if two errors are at the same depth level, we introduce additional ranking based on error type. See the `getErrorTypeScore` function for details @param errors @returns
typescript
packages/client/src/runtime/core/errorRendering/applyUnionError.ts
99
[ "errors" ]
false
2
7.12
prisma/prisma
44,834
jsdoc
false
find_airflow_root_path_to_operate_on
def find_airflow_root_path_to_operate_on() -> Path:
    """
    Find the root of airflow sources we operate on.

    Handle the case when Breeze is installed via `pipx` or `uv tool` from a different source
    tree, so it searches upwards from the current directory to find the right root of the
    airflow directory we are actually in. This **might** be different from the sources of
    Airflow that Breeze was installed from. If not found, we operate on the Airflow sources
    that Breeze was installed from. This handles the case when we run Breeze from a "random"
    directory.

    This method also handles the following errors and warnings:

       * It fails (and exits hard) if Breeze is installed in non-editable mode (in which case it will
         not find the Airflow sources when walking upwards from the directory where it is installed)
       * It warns (with a 2-second timeout) if you are using Breeze from a different airflow sources
         tree than the one you operate on.
       * If we are running in the same source tree as where Breeze was installed from (so no warning
         above), it warns (with a 2-second timeout) if there is a change in the setup.* files of
         Breeze since installation time. In such a case the user is encouraged to re-install Breeze
         to update dependencies.

    :return: Path for the found sources.
    """
    sources_root_from_env = os.getenv("AIRFLOW_ROOT_PATH", None)
    if sources_root_from_env:
        return Path(sources_root_from_env)
    installation_airflow_sources = get_installation_airflow_sources()
    if installation_airflow_sources is None and not skip_breeze_self_upgrade_check():
        get_console().print(
            "\n[error]Breeze should only be installed with --editable flag[/]\n\n"
            "[warning]Please go to Airflow sources and run[/]\n\n"
            f"     {NAME} setup self-upgrade --use-current-airflow-sources\n"
            '[warning]If during installation you see a warning starting "Ignoring --editable install",[/]\n'
            '[warning]make sure you first downgrade the "packaging" package to <23.2, for example by:[/]\n\n'
            f'     pip install "packaging<23.2"\n\n'
        )
        sys.exit(1)
    airflow_sources = get_used_airflow_sources()
    if not skip_breeze_self_upgrade_check():
        # only print warning and sleep if not producing complete results
        reinstall_if_different_sources(airflow_sources)
        reinstall_if_setup_changed()
    os.chdir(airflow_sources.as_posix())
    airflow_home_dir = Path(os.environ.get("AIRFLOW_HOME", (Path.home() / "airflow").resolve().as_posix()))
    if airflow_sources.resolve() == airflow_home_dir.resolve():
        get_console().print(
            f"\n[error]Your Airflow sources are checked out in {airflow_home_dir}, which "
            f"is also your AIRFLOW_HOME, where Airflow writes logs and its database.\n"
            f"This is a bad idea because Airflow might overwrite and clean up your checked-out "
            f"sources and .git repository.[/]\n"
        )
        get_console().print("\nPlease check out your Airflow code elsewhere.\n")
        sys.exit(1)
    return airflow_sources
Find the root of airflow sources we operate on.

Handle the case when Breeze is installed via `pipx` or `uv tool` from a different source tree, so it searches upwards from the current directory to find the right root of the airflow directory we are actually in. This **might** be different from the sources of Airflow that Breeze was installed from. If not found, we operate on the Airflow sources that Breeze was installed from. This handles the case when we run Breeze from a "random" directory.

This method also handles the following errors and warnings:

   * It fails (and exits hard) if Breeze is installed in non-editable mode (in which case it will not find the Airflow sources when walking upwards from the directory where it is installed)
   * It warns (with a 2-second timeout) if you are using Breeze from a different airflow sources tree than the one you operate on.
   * If we are running in the same source tree as where Breeze was installed from (so no warning above), it warns (with a 2-second timeout) if there is a change in the setup.* files of Breeze since installation time. In such a case the user is encouraged to re-install Breeze to update dependencies.

:return: Path for the found sources.
python
dev/breeze/src/airflow_breeze/utils/path_utils.py
185
[]
Path
true
6
8.24
apache/airflow
43,597
unknown
false
connectionDelay
long connectionDelay(Node node, long now);
Return the number of milliseconds to wait, based on the connection state, before attempting to send data. When disconnected, this respects the reconnect backoff time. When connecting or connected, this handles slow/stalled connections. @param node The node to check @param now The current timestamp @return The number of milliseconds to wait.
java
clients/src/main/java/org/apache/kafka/clients/KafkaClient.java
59
[ "node", "now" ]
true
1
6.8
apache/kafka
31,560
javadoc
false
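A hypothetical caller sketch for this interface method; 'client' and 'node' stand in for a real KafkaClient instance and a broker Node, and the helper name is illustrative.

import org.apache.kafka.clients.KafkaClient;
import org.apache.kafka.common.Node;

class BackoffSketch {
    // Waits out whatever delay the current connection state imposes before polling.
    static void pollRespectingBackoff(KafkaClient client, Node node) {
        long now = System.currentTimeMillis();
        if (!client.isReady(node, now)) {
            long waitMs = client.connectionDelay(node, now); // reconnect backoff or connect timeout
            client.poll(waitMs, now);
        }
    }
}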
setAsText
@Override public void setAsText(@Nullable String text) throws IllegalArgumentException { Properties props = new Properties(); if (text != null) { try { // Must use the ISO-8859-1 encoding because Properties.load(stream) expects it. props.load(new ByteArrayInputStream(text.getBytes(StandardCharsets.ISO_8859_1))); } catch (IOException ex) { // Should never happen. throw new IllegalArgumentException( "Failed to parse [" + text + "] into Properties", ex); } } setValue(props); }
Convert {@link String} into {@link Properties}, considering it as properties content. @param text the text to be so converted
java
spring-beans/src/main/java/org/springframework/beans/propertyeditors/PropertiesEditor.java
49
[ "text" ]
void
true
3
7.04
spring-projects/spring-framework
59,386
javadoc
false
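A minimal usage sketch; the property keys and values are illustrative, not from the record.

import java.util.Properties;
import org.springframework.beans.propertyeditors.PropertiesEditor;

class PropertiesEditorDemo {
    public static void main(String[] args) {
        PropertiesEditor editor = new PropertiesEditor();
        editor.setAsText("host=localhost\nport=5432");
        Properties props = (Properties) editor.getValue();
        System.out.println(props.getProperty("port")); // 5432
    }
}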