Schema (field: type, range/classes):

function_name: string, lengths 1–57
function_code: string, lengths 20–4.99k
documentation: string, lengths 50–2k
language: string, 5 classes
file_path: string, lengths 8–166
line_number: int32, 4–16.7k
parameters: list, lengths 0–20
return_type: string, lengths 0–131
has_type_hints: bool, 2 classes
complexity: int32, 1–51
quality_score: float32, 6–9.68
repo_name: string, 34 classes
repo_stars: int32, 2.9k–242k
docstring_style: string, 7 classes
is_async: bool, 2 classes
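A minimal sketch of working with records that follow this schema, assuming each row is loaded as a plain dict (the field values below are copied from sample rows later in this dump; the loading mechanism itself is not shown):

```python
# Hypothetical in-memory rows following the schema above; the values are
# taken from records that appear later in this dump.
records = [
    {"function_name": "fit", "language": "python", "quality_score": 6.0},
    {"function_name": "toPrimitive", "language": "java", "quality_score": 8.08},
    {"function_name": "getBlockIndent", "language": "typescript", "quality_score": 8.48},
]

# Keep only the higher-quality rows by quality_score.
high_quality = [r["function_name"] for r in records if r["quality_score"] >= 7]
print(high_quality)
```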
getReferenceForShorthandProperty
function getReferenceForShorthandProperty({ flags, valueDeclaration }: Symbol, search: Search, state: State): void { const shorthandValueSymbol = state.checker.getShorthandAssignmentValueSymbol(valueDeclaration)!; const name = valueDeclaration && getNameOfDeclaration(valueDeclaration); /* * Because in short-hand property assignment, an identifier which stored as name of the short-hand property assignment * has two meanings: property name and property value. Therefore when we do findAllReference at the position where * an identifier is declared, the language service should return the position of the variable declaration as well as * the position in short-hand property assignment excluding property accessing. However, if we do findAllReference at the * position of property accessing, the referenceEntry of such position will be handled in the first case. */ if (!(flags & SymbolFlags.Transient) && name && search.includes(shorthandValueSymbol)) { addReference(name, shorthandValueSymbol, state); } }
Search within node "container" for references for a search value, where the search value is defined as a tuple of (searchSymbol, searchText, searchLocation, and searchMeaning). searchLocation: a node where the search value
typescript
src/services/findAllReferences.ts
2,100
[ "{ flags, valueDeclaration }", "search", "state" ]
true
5
6.08
microsoft/TypeScript
107,154
jsdoc
false
checkUnauthorizedTopics
private void checkUnauthorizedTopics(Cluster cluster) { if (!cluster.unauthorizedTopics().isEmpty()) { log.error("Topic authorization failed for topics {}", cluster.unauthorizedTopics()); unauthorizedTopics = new HashSet<>(cluster.unauthorizedTopics()); } }
Checks the given {@link Cluster} metadata for unauthorized topics. If any topics failed authorization, logs an error and records them in {@code unauthorizedTopics}. @param cluster the cluster metadata returned from the broker.
java
clients/src/main/java/org/apache/kafka/clients/Metadata.java
476
[ "cluster" ]
void
true
2
7.76
apache/kafka
31,560
javadoc
false
fit
def fit(self, X, y): """Fit a semi-supervised label propagation model to X. The input samples (labeled and unlabeled) are provided by matrix X, and target labels are provided by matrix y. We conventionally apply the label -1 to unlabeled samples in matrix y in a semi-supervised classification. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target class values with unlabeled points marked as -1. All unlabeled samples will be transductively assigned labels internally, which are stored in `transduction_`. Returns ------- self : object Returns the instance itself. """ X, y = validate_data( self, X, y, accept_sparse=["csr", "csc"], reset=True, ) self.X_ = X check_classification_targets(y) # actual graph construction (implementations should override this) graph_matrix = self._build_graph() # label construction # construct a categorical distribution for classification only classes = np.unique(y) classes = classes[classes != -1] self.classes_ = classes n_samples, n_classes = len(y), len(classes) y = np.asarray(y) unlabeled = y == -1 # initialize distributions self.label_distributions_ = np.zeros((n_samples, n_classes)) for label in classes: self.label_distributions_[y == label, classes == label] = 1 y_static = np.copy(self.label_distributions_) if self._variant == "propagation": # LabelPropagation y_static[unlabeled] = 0 else: # LabelSpreading y_static *= 1 - self.alpha l_previous = np.zeros((self.X_.shape[0], n_classes)) unlabeled = unlabeled[:, np.newaxis] if sparse.issparse(graph_matrix): graph_matrix = graph_matrix.tocsr() for self.n_iter_ in range(self.max_iter): if np.abs(self.label_distributions_ - l_previous).sum() < self.tol: break l_previous = self.label_distributions_ self.label_distributions_ = safe_sparse_dot( graph_matrix, self.label_distributions_ ) if self._variant == "propagation": normalizer = np.sum(self.label_distributions_, axis=1)[:, np.newaxis] normalizer[normalizer == 0] = 1 self.label_distributions_ /= normalizer self.label_distributions_ = np.where( unlabeled, self.label_distributions_, y_static ) else: # clamp self.label_distributions_ = ( np.multiply(self.alpha, self.label_distributions_) + y_static ) else: warnings.warn( "max_iter=%d was reached without convergence." % self.max_iter, category=ConvergenceWarning, ) self.n_iter_ += 1 normalizer = np.sum(self.label_distributions_, axis=1)[:, np.newaxis] normalizer[normalizer == 0] = 1 self.label_distributions_ /= normalizer # set the transduction item transduction = self.classes_[np.argmax(self.label_distributions_, axis=1)] self.transduction_ = transduction.ravel() return self
Fit a semi-supervised label propagation model to X. The input samples (labeled and unlabeled) are provided by matrix X, and target labels are provided by matrix y. We conventionally apply the label -1 to unlabeled samples in matrix y in a semi-supervised classification. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target class values with unlabeled points marked as -1. All unlabeled samples will be transductively assigned labels internally, which are stored in `transduction_`. Returns ------- self : object Returns the instance itself.
python
sklearn/semi_supervised/_label_propagation.py
235
[ "self", "X", "y" ]
false
10
6
scikit-learn/scikit-learn
64,340
numpy
false
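The `fit` row above is scikit-learn's semi-supervised label propagation. A minimal sketch of calling it, with made-up data values (two labeled clusters and one unlabeled point marked -1, per the docstring's convention):

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two labeled points per the docstring's convention; -1 marks the unlabeled sample.
X = np.array([[0.0], [1.0], [8.0], [9.0]])
y = np.array([0, 0, 1, -1])

model = LabelPropagation()
model.fit(X, y)

# transduction_ holds the labels assigned to every sample, including the
# unlabeled one, which inherits its nearest neighbor's class.
print(model.transduction_)
```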
getBlockIndent
function getBlockIndent(sourceFile: SourceFile, position: number, options: EditorSettings): number { // move backwards until we find a line with a non-whitespace character, // then find the first non-whitespace character for that line. let current = position; while (current > 0) { const char = sourceFile.text.charCodeAt(current); if (!isWhiteSpaceLike(char)) { break; } current--; } const lineStart = getLineStartPositionForPosition(current, sourceFile); return findFirstNonWhitespaceColumn(lineStart, current, sourceFile, options); }
@param assumeNewLineBeforeCloseBrace `false` when called on text from a real source file. `true` when we need to assume `position` is on a newline. This is useful for codefixes. Consider ``` function f() { |} ``` with `position` at `|`. When inserting some text after an open brace, we would like to get indentation as if a newline was already there. By default indentation at `position` will be 0 so 'assumeNewLineBeforeCloseBrace' overrides this behavior.
typescript
src/services/formatting/smartIndenter.ts
182
[ "sourceFile", "position", "options" ]
true
3
8.48
microsoft/TypeScript
107,154
jsdoc
false
toPrimitive
public static float[] toPrimitive(final Float[] array) { if (array == null) { return null; } if (array.length == 0) { return EMPTY_FLOAT_ARRAY; } final float[] result = new float[array.length]; for (int i = 0; i < array.length; i++) { result[i] = array[i].floatValue(); } return result; }
Converts an array of object Floats to primitives. <p> This method returns {@code null} for a {@code null} input array. </p> @param array a {@link Float} array, may be {@code null}. @return a {@code float} array, {@code null} if null array input. @throws NullPointerException if an array element is {@code null}.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
9,006
[ "array" ]
true
4
8.08
apache/commons-lang
2,896
javadoc
false
set
def set( cls, key: str, value: Any, *, dag_id: str, task_id: str, run_id: str, map_index: int = -1, serialize: bool = True, session: Session = NEW_SESSION, ) -> None: """ Store an XCom value. :param key: Key to store the XCom. :param value: XCom value to store. :param dag_id: DAG ID. :param task_id: Task ID. :param run_id: DAG run ID for the task. :param map_index: Optional map index to assign XCom for a mapped task. :param serialize: Optional parameter to specify if value should be serialized or not. The default is ``True``. :param session: Database session. If not given, a new session will be created for this function. """ from airflow.models.dagrun import DagRun if not key: raise ValueError(f"XCom key must be a non-empty string. Received: {key!r}") if not run_id: raise ValueError(f"run_id must be passed. Passed run_id={run_id}") dag_run_id = session.scalar(select(DagRun.id).where(DagRun.dag_id == dag_id, DagRun.run_id == run_id)) if dag_run_id is None: raise ValueError(f"DAG run not found on DAG {dag_id!r} with ID {run_id!r}") # Seamlessly resolve LazySelectSequence to a list. This intends to work # as a "lazy list" to avoid pulling a ton of XComs unnecessarily, but if # it's pushed into XCom, the user should be aware of the performance # implications, and this avoids leaking the implementation detail. if isinstance(value, LazySelectSequence): warning_message = ( "Coercing mapped lazy proxy %s from task %s (DAG %s, run %s) " "to list, which may degrade performance. Review resource " "requirements for this operation, and call list() to suppress " "this message. See Dynamic Task Mapping documentation for " "more information about lazy proxy objects." ) log.warning( warning_message, "return value" if key == XCOM_RETURN_KEY else f"value {key}", task_id, dag_id, run_id, ) value = list(value) if serialize: value = cls.serialize_value( value=value, key=key, task_id=task_id, dag_id=dag_id, run_id=run_id, map_index=map_index, ) # Remove duplicate XComs and insert a new one. session.execute( delete(cls).where( cls.key == key, cls.run_id == run_id, cls.task_id == task_id, cls.dag_id == dag_id, cls.map_index == map_index, ) ) new = cls( dag_run_id=dag_run_id, key=key, value=value, run_id=run_id, task_id=task_id, dag_id=dag_id, map_index=map_index, ) session.add(new) session.flush()
Store an XCom value. :param key: Key to store the XCom. :param value: XCom value to store. :param dag_id: DAG ID. :param task_id: Task ID. :param run_id: DAG run ID for the task. :param map_index: Optional map index to assign XCom for a mapped task. :param serialize: Optional parameter to specify if value should be serialized or not. The default is ``True``. :param session: Database session. If not given, a new session will be created for this function.
python
airflow-core/src/airflow/models/xcom.py
161
[ "cls", "key", "value", "dag_id", "task_id", "run_id", "map_index", "serialize", "session" ]
None
true
7
6.8
apache/airflow
43,597
sphinx
false
parseUnsignedInt
@CanIgnoreReturnValue public static int parseUnsignedInt(String string, int radix) { checkNotNull(string); long result = Long.parseLong(string, radix); if ((result & INT_MASK) != result) { throw new NumberFormatException( "Input " + string + " in base " + radix + " is not in the range of an unsigned integer"); } return (int) result; }
Returns the unsigned {@code int} value represented by a string with the given radix. <p><b>Java 8+ users:</b> use {@link Integer#parseUnsignedInt(String, int)} instead. @param string the string containing the unsigned integer representation to be parsed. @param radix the radix to use while parsing {@code s}; must be between {@link Character#MIN_RADIX} and {@link Character#MAX_RADIX}. @throws NumberFormatException if the string does not contain a valid unsigned {@code int}, or if supplied radix is invalid. @throws NullPointerException if {@code s} is null (in contrast to {@link Integer#parseInt(String)})
java
android/guava/src/com/google/common/primitives/UnsignedInts.java
360
[ "string", "radix" ]
true
2
6.4
google/guava
51,352
javadoc
false
use
def use(self, styles: dict[str, Any]) -> Styler: """ Set the styles on the current Styler. Possibly uses styles from ``Styler.export``. Parameters ---------- styles : dict(str, Any) List of attributes to add to Styler. Dict keys should contain only: - "apply": list of styler functions, typically added with ``apply`` or ``map``. - "table_attributes": HTML attributes, typically added with ``set_table_attributes``. - "table_styles": CSS selectors and properties, typically added with ``set_table_styles``. - "hide_index": whether the index is hidden, typically added with ``hide_index``, or a boolean list for hidden levels. - "hide_columns": whether column headers are hidden, typically added with ``hide_columns``, or a boolean list for hidden levels. - "hide_index_names": whether index names are hidden. - "hide_column_names": whether column header names are hidden. - "css": the css class names used. Returns ------- Styler Instance of class with defined styler attributes added. See Also -------- Styler.export : Export the non data dependent attributes to the current Styler. Examples -------- >>> styler = pd.DataFrame([[1, 2], [3, 4]]).style >>> styler2 = pd.DataFrame([[9, 9, 9]]).style >>> styler.hide(axis=0).highlight_max(axis=1) # doctest: +SKIP >>> export = styler.export() >>> styler2.use(export) # doctest: +SKIP """ self._todo.extend(styles.get("apply", [])) table_attributes: str = self.table_attributes or "" obj_table_atts: str = ( "" if styles.get("table_attributes") is None else str(styles.get("table_attributes")) ) self.set_table_attributes((table_attributes + " " + obj_table_atts).strip()) if styles.get("table_styles"): self.set_table_styles(styles.get("table_styles"), overwrite=False) for obj in ["index", "columns"]: hide_obj = styles.get("hide_" + obj) if hide_obj is not None: if isinstance(hide_obj, bool): n = getattr(self, obj).nlevels setattr(self, "hide_" + obj + "_", [hide_obj] * n) else: setattr(self, "hide_" + obj + "_", hide_obj) self.hide_index_names = styles.get("hide_index_names", False) self.hide_column_names = styles.get("hide_column_names", False) if styles.get("css"): self.css = styles.get("css") # type: ignore[assignment] return self
Set the styles on the current Styler. Possibly uses styles from ``Styler.export``. Parameters ---------- styles : dict(str, Any) List of attributes to add to Styler. Dict keys should contain only: - "apply": list of styler functions, typically added with ``apply`` or ``map``. - "table_attributes": HTML attributes, typically added with ``set_table_attributes``. - "table_styles": CSS selectors and properties, typically added with ``set_table_styles``. - "hide_index": whether the index is hidden, typically added with ``hide_index``, or a boolean list for hidden levels. - "hide_columns": whether column headers are hidden, typically added with ``hide_columns``, or a boolean list for hidden levels. - "hide_index_names": whether index names are hidden. - "hide_column_names": whether column header names are hidden. - "css": the css class names used. Returns ------- Styler Instance of class with defined styler attributes added. See Also -------- Styler.export : Export the non data dependent attributes to the current Styler. Examples -------- >>> styler = pd.DataFrame([[1, 2], [3, 4]]).style >>> styler2 = pd.DataFrame([[9, 9, 9]]).style >>> styler.hide(axis=0).highlight_max(axis=1) # doctest: +SKIP >>> export = styler.export() >>> styler2.use(export) # doctest: +SKIP
python
pandas/io/formats/style.py
2,288
[ "self", "styles" ]
Styler
true
9
7.92
pandas-dev/pandas
47,362
numpy
false
asPredicate
public static <I> Predicate<I> asPredicate(final FailablePredicate<I, ?> predicate) { return input -> test(predicate, input); }
Converts the given {@link FailablePredicate} into a standard {@link Predicate}. @param <I> the type used by the predicates @param predicate a {@link FailablePredicate} @return a standard {@link Predicate} @since 3.10
java
src/main/java/org/apache/commons/lang3/Functions.java
428
[ "predicate" ]
true
1
6.16
apache/commons-lang
2,896
javadoc
false
newPrototypeInstance
protected Object newPrototypeInstance() throws BeansException { if (logger.isDebugEnabled()) { logger.debug("Creating new instance of bean '" + this.targetBeanName + "'"); } return getBeanFactory().getBean(getTargetBeanName()); }
Subclasses should call this method to create a new prototype instance. @throws BeansException if bean creation failed
java
spring-aop/src/main/java/org/springframework/aop/target/AbstractPrototypeBasedTargetSource.java
65
[]
Object
true
2
6.56
spring-projects/spring-framework
59,386
javadoc
false
lastIndexOf
private static int lastIndexOf(boolean[] array, boolean target, int start, int end) { for (int i = end - 1; i >= start; i--) { if (array[i] == target) { return i; } } return -1; }
Returns the index of the last appearance of the value {@code target} in {@code array}. @param array an array of {@code boolean} values, possibly empty @param target a primitive {@code boolean} value @return the greatest index {@code i} for which {@code array[i] == target}, or {@code -1} if no such index exists.
java
android/guava/src/com/google/common/primitives/Booleans.java
217
[ "array", "target", "start", "end" ]
true
3
7.76
google/guava
51,352
javadoc
false
instantiateConfigProviders
private Map<String, ConfigProvider> instantiateConfigProviders( Map<String, String> indirectConfigs, Map<String, ?> providerConfigProperties, Predicate<String> classNameFilter ) { final String configProviders = indirectConfigs.get(CONFIG_PROVIDERS_CONFIG); if (configProviders == null || configProviders.isEmpty()) { return Collections.emptyMap(); } Map<String, String> providerMap = new HashMap<>(); for (String provider : configProviders.split(",")) { String providerClass = providerClassProperty(provider); if (indirectConfigs.containsKey(providerClass)) { String providerClassName = indirectConfigs.get(providerClass); if (classNameFilter.test(providerClassName)) { providerMap.put(provider, providerClassName); } else { throw new ConfigException(providerClassName + " is not allowed. Update System property '" + AUTOMATIC_CONFIG_PROVIDERS_PROPERTY + "' to allow " + providerClassName); } } } // Instantiate Config Providers Map<String, ConfigProvider> configProviderInstances = new HashMap<>(); for (Map.Entry<String, String> entry : providerMap.entrySet()) { try { String prefix = CONFIG_PROVIDERS_CONFIG + "." + entry.getKey() + CONFIG_PROVIDERS_PARAM; Map<String, ?> configProperties = configProviderProperties(prefix, providerConfigProperties); ConfigProvider provider = Utils.newInstance(entry.getValue(), ConfigProvider.class); provider.configure(configProperties); configProviderInstances.put(entry.getKey(), provider); } catch (ClassNotFoundException e) { log.error("Could not load config provider class {}", entry.getValue(), e); throw new ConfigException(providerClassProperty(entry.getKey()), entry.getValue(), "Could not load config provider class or one of its dependencies"); } } return configProviderInstances; }
Instantiates and configures the ConfigProviders. The config providers configs are defined as follows: config.providers : A comma-separated list of names for providers. config.providers.{name}.class : The Java class name for a provider. config.providers.{name}.param.{param-name} : A parameter to be passed to the above Java class on initialization. returns a map of config provider name and its instance. @param indirectConfigs The map of potential variable configs @param providerConfigProperties The map of config provider configs @param classNameFilter Filter for config provider class names @return map of config provider name and its instance.
java
clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java
601
[ "indirectConfigs", "providerConfigProperties", "classNameFilter" ]
true
6
7.6
apache/kafka
31,560
javadoc
false
getdomain
def getdomain(x): """ Return a domain suitable for given abscissae. Find a domain suitable for a polynomial or Chebyshev series defined at the values supplied. Parameters ---------- x : array_like 1-d array of abscissae whose domain will be determined. Returns ------- domain : ndarray 1-d array containing two values. If the inputs are complex, then the two returned points are the lower left and upper right corners of the smallest rectangle (aligned with the axes) in the complex plane containing the points `x`. If the inputs are real, then the two points are the ends of the smallest interval containing the points `x`. See Also -------- mapparms, mapdomain Examples -------- >>> import numpy as np >>> from numpy.polynomial import polyutils as pu >>> points = np.arange(4)**2 - 5; points array([-5, -4, -1, 4]) >>> pu.getdomain(points) array([-5., 4.]) >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle >>> pu.getdomain(c) array([-1.-1.j, 1.+1.j]) """ [x] = as_series([x], trim=False) if x.dtype.char in np.typecodes['Complex']: rmin, rmax = x.real.min(), x.real.max() imin, imax = x.imag.min(), x.imag.max() return np.array((complex(rmin, imin), complex(rmax, imax))) else: return np.array((x.min(), x.max()))
Return a domain suitable for given abscissae. Find a domain suitable for a polynomial or Chebyshev series defined at the values supplied. Parameters ---------- x : array_like 1-d array of abscissae whose domain will be determined. Returns ------- domain : ndarray 1-d array containing two values. If the inputs are complex, then the two returned points are the lower left and upper right corners of the smallest rectangle (aligned with the axes) in the complex plane containing the points `x`. If the inputs are real, then the two points are the ends of the smallest interval containing the points `x`. See Also -------- mapparms, mapdomain Examples -------- >>> import numpy as np >>> from numpy.polynomial import polyutils as pu >>> points = np.arange(4)**2 - 5; points array([-5, -4, -1, 4]) >>> pu.getdomain(points) array([-5., 4.]) >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle >>> pu.getdomain(c) array([-1.-1.j, 1.+1.j])
python
numpy/polynomial/polyutils.py
194
[ "x" ]
false
3
7.52
numpy/numpy
31,054
numpy
false
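The `getdomain` row above can be exercised directly; a small usage sketch with made-up abscissae:

```python
import numpy as np
from numpy.polynomial import polyutils as pu

# For real input, the domain is the smallest closed interval
# containing all the points.
pts = np.array([2.0, -3.0, 7.0])
print(pu.getdomain(pts))  # [-3.  7.]
```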
entryIterator
@Override Iterator<Entry<Cut<C>, Range<C>>> entryIterator() { /* * We want to start the iteration at the first range where the upper bound is in * upperBoundWindow. */ Iterator<Range<C>> backingItr; if (!upperBoundWindow.hasLowerBound()) { backingItr = rangesByLowerBound.values().iterator(); } else { Entry<Cut<C>, Range<C>> lowerEntry = rangesByLowerBound.lowerEntry(upperBoundWindow.lowerEndpoint()); if (lowerEntry == null) { backingItr = rangesByLowerBound.values().iterator(); } else if (upperBoundWindow.lowerBound.isLessThan(lowerEntry.getValue().upperBound)) { backingItr = rangesByLowerBound.tailMap(lowerEntry.getKey(), true).values().iterator(); } else { backingItr = rangesByLowerBound .tailMap(upperBoundWindow.lowerEndpoint(), true) .values() .iterator(); } } return new AbstractIterator<Entry<Cut<C>, Range<C>>>() { @Override protected @Nullable Entry<Cut<C>, Range<C>> computeNext() { if (!backingItr.hasNext()) { return endOfData(); } Range<C> range = backingItr.next(); if (upperBoundWindow.upperBound.isLessThan(range.upperBound)) { return endOfData(); } else { return immutableEntry(range.upperBound, range); } } }; }
upperBoundWindow represents the headMap/subMap/tailMap view of the entire "ranges by upper bound" map; it's a constraint on the *keys*, and does not affect the values.
java
android/guava/src/com/google/common/collect/TreeRangeSet.java
363
[]
true
6
6.72
google/guava
51,352
javadoc
false
processRetryLogic
private void processRetryLogic(AcknowledgeRequestState acknowledgeRequestState, AtomicBoolean shouldRetry, long responseCompletionTimeMs) { if (shouldRetry.get()) { acknowledgeRequestState.onFailedAttempt(responseCompletionTimeMs); // Check for any acknowledgements that did not receive a response. // These acknowledgements are failed with InvalidRecordStateException. acknowledgeRequestState.processPendingInFlightAcknowledgements(new InvalidRecordStateException(INVALID_RESPONSE)); } else { acknowledgeRequestState.onSuccessfulAttempt(responseCompletionTimeMs); acknowledgeRequestState.processingComplete(); } }
Applies the retry logic for an acknowledge request once a response has completed. If a retry is required, records the failed attempt and fails any acknowledgements still awaiting a response with an {@link InvalidRecordStateException}; otherwise records the successful attempt and marks processing complete. @param acknowledgeRequestState The state of the acknowledge request being processed. @param shouldRetry Whether the request should be retried. @param responseCompletionTimeMs The time at which the response completed.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareConsumeRequestManager.java
1,082
[ "acknowledgeRequestState", "shouldRetry", "responseCompletionTimeMs" ]
void
true
2
8.08
apache/kafka
31,560
javadoc
false
setSubscriptionType
private void setSubscriptionType(SubscriptionType type) { if (this.subscriptionType == SubscriptionType.NONE) this.subscriptionType = type; else if (this.subscriptionType != type) throw new IllegalStateException(SUBSCRIPTION_EXCEPTION_MESSAGE); }
This method sets the subscription type if it is not already set (i.e. when it is NONE), or verifies that the subscription type is equal to the given type when it is set (i.e. when it is not NONE) @param type The given subscription type
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
185
[ "type" ]
void
true
3
7.04
apache/kafka
31,560
javadoc
false
emitWorker
function emitWorker(code: OpCode, args?: OperationArguments, location?: TextRange): void { if (operations === undefined) { operations = []; operationArguments = []; operationLocations = []; } if (labelOffsets === undefined) { // mark entry point markLabel(defineLabel()); } const operationIndex = operations.length; operations[operationIndex] = code; operationArguments![operationIndex] = args; operationLocations![operationIndex] = location; }
Emits an operation. @param code The OpCode for the operation. @param args The optional arguments for the operation.
typescript
src/compiler/transformers/generators.ts
2,721
[ "code", "args?", "location?" ]
true
3
6.88
microsoft/TypeScript
107,154
jsdoc
false
writeYield
function writeYield(expression: Expression, operationLocation: TextRange | undefined): void { lastOperationWasAbrupt = true; writeStatement( setEmitFlags( setTextRange( factory.createReturnStatement( factory.createArrayLiteralExpression( expression ? [createInstruction(Instruction.Yield), expression] : [createInstruction(Instruction.Yield)], ), ), operationLocation, ), EmitFlags.NoTokenSourceMaps, ), ); }
Writes a Yield operation to the current label's statement list. @param expression The expression to yield. @param operationLocation The source map location for the operation.
typescript
src/compiler/transformers/generators.ts
3,228
[ "expression", "operationLocation" ]
true
2
6.4
microsoft/TypeScript
107,154
jsdoc
false
acquirePermit
private boolean acquirePermit() { if (getLimit() <= NO_LIMIT || acquireCount < getLimit()) { acquireCount++; return true; } return false; }
Internal helper method for acquiring a permit. This method checks whether currently a permit can be acquired and - if so - increases the internal counter. The return value indicates whether a permit could be acquired. This method must be called with the lock of this object held. @return a flag whether a permit could be acquired.
java
src/main/java/org/apache/commons/lang3/concurrent/TimedSemaphore.java
310
[]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
get_fieldstructure
def get_fieldstructure(adtype, lastname=None, parents=None,): """ Returns a dictionary with fields indexing lists of their parent fields. This function is used to simplify access to fields nested in other fields. Parameters ---------- adtype : np.dtype Input datatype lastname : optional Last processed field name (used internally during recursion). parents : dictionary Dictionary of parent fields (used internally during recursion). Examples -------- >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('A', int), ... ('B', [('BA', int), ... ('BB', [('BBA', int), ('BBB', int)])])]) >>> rfn.get_fieldstructure(ndtype) ... # XXX: possible regression, order of BBA and BBB is swapped {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} """ if parents is None: parents = {} names = adtype.names for name in names: current = adtype[name] if current.names is not None: if lastname: parents[name] = [lastname, ] else: parents[name] = [] parents.update(get_fieldstructure(current, name, parents)) else: lastparent = list(parents.get(lastname, []) or []) if lastparent: lastparent.append(lastname) elif lastname: lastparent = [lastname, ] parents[name] = lastparent or [] return parents
Returns a dictionary with fields indexing lists of their parent fields. This function is used to simplify access to fields nested in other fields. Parameters ---------- adtype : np.dtype Input datatype lastname : optional Last processed field name (used internally during recursion). parents : dictionary Dictionary of parent fields (used internally during recursion). Examples -------- >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('A', int), ... ('B', [('BA', int), ... ('BB', [('BBA', int), ('BBB', int)])])]) >>> rfn.get_fieldstructure(ndtype) ... # XXX: possible regression, order of BBA and BBB is swapped {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']}
python
numpy/lib/recfunctions.py
226
[ "adtype", "lastname", "parents" ]
false
11
7.52
numpy/numpy
31,054
numpy
false
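The `get_fieldstructure` row above maps each field of a structured dtype to the chain of its parent fields; a small usage sketch with a made-up nested dtype:

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# A nested dtype: field 'b' contains subfield 'ba'.
dt = np.dtype([('a', int), ('b', [('ba', int)])])

# Top-level fields map to empty lists; nested fields list their parents.
print(rfn.get_fieldstructure(dt))
```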
register
private void register(Map<String, PropertyDescriptor> candidates, PropertyDescriptor descriptor) { if (!candidates.containsKey(descriptor.getName()) && isCandidate(descriptor)) { candidates.put(descriptor.getName(), descriptor); } }
Return the {@link PropertyDescriptor} instances that are valid candidates for the specified {@link TypeElement type} based on the specified {@link ExecutableElement factory method}, if any. @param type the target type @param factoryMethod the method that triggered the metadata for that {@code type} or {@code null} @return the candidate properties for metadata generation
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/PropertyDescriptorResolver.java
151
[ "candidates", "descriptor" ]
void
true
3
7.28
spring-projects/spring-boot
79,428
javadoc
false
init
final void init() { /* * requireNonNull is safe because this is called from the constructor after `futures` is set but * before releaseResources could be called (because we have not yet set up any of the listeners * that could call it, nor exposed this Future for users to call cancel() on). */ requireNonNull(futures); // Corner case: List is empty. if (futures.isEmpty()) { handleAllCompleted(); return; } // NOTE: If we ever want to use a custom executor here, have a look at CombinedFuture as we'll // need to handle RejectedExecutionException if (allMustSucceed) { // We need fail fast, so we have to keep track of which future failed so we can propagate // the exception immediately // Register a listener on each Future in the list to update the state of this future. // Note that if all the futures on the list are done prior to completing this loop, the last // call to addListener() will callback to setOneValue(), transitively call our cleanup // listener, and set this.futures to null. // This is not actually a problem, since the foreach only needs this.futures to be non-null // at the beginning of the loop. int i = 0; for (ListenableFuture<? extends InputT> future : futures) { int index = i++; if (future.isDone()) { processAllMustSucceedDoneFuture(index, future); } else { future.addListener( () -> processAllMustSucceedDoneFuture(index, future), directExecutor()); } } } else { /* * We'll call the user callback or collect the values only when all inputs complete, * regardless of whether some failed. This lets us avoid calling expensive methods like * Future.get() when we don't need to (specifically, for whenAllComplete().call*()), and it * lets all futures share the same listener. * * We store `localFuturesOrNull` inside the listener because `this.futures` might be nulled * out by the time the listener runs for the final future -- at which point we need to check * all inputs for exceptions *if* we're collecting values. If we're not, then the listener * doesn't need access to the futures again, so we can just pass `null`. * * TODO(b/112550045): Allocating a single, cheaper listener is (I think) only an optimization. * If we make some other optimizations, this one will no longer be necessary. The optimization * could actually hurt in some cases, as it forces us to keep all inputs in memory until the * final input completes. */ @RetainedLocalRef ImmutableCollection<? extends ListenableFuture<? extends InputT>> localFutures = futures; ImmutableCollection<? extends Future<? extends InputT>> localFuturesOrNull = collectsValues ? localFutures : null; Runnable listener = () -> decrementCountAndMaybeComplete(localFuturesOrNull); for (ListenableFuture<? extends InputT> future : localFutures) { if (future.isDone()) { decrementCountAndMaybeComplete(localFuturesOrNull); } else { future.addListener(listener, directExecutor()); } } } }
Must be called at the end of each subclass's constructor. This method performs the "real" initialization; we can't put this in the constructor because, in the case where futures are already complete, we would not initialize the subclass before calling {@link #collectValueFromNonCancelledFuture}. As this is called after the subclass is constructed, we're guaranteed to have properly initialized the subclass.
java
android/guava/src/com/google/common/util/concurrent/AggregateFuture.java
113
[]
void
true
6
6.8
google/guava
51,352
javadoc
false
createWithExpectedSize
public static <E extends @Nullable Object> CompactHashSet<E> createWithExpectedSize( int expectedSize) { return new CompactHashSet<>(expectedSize); }
Creates a {@code CompactHashSet} instance, with a high enough "initial capacity" that it <i>should</i> hold {@code expectedSize} elements without growth. @param expectedSize the number of elements you expect to add to the returned set @return a new, empty {@code CompactHashSet} with enough capacity to hold {@code expectedSize} elements without resizing @throws IllegalArgumentException if {@code expectedSize} is negative
java
android/guava/src/com/google/common/collect/CompactHashSet.java
122
[ "expectedSize" ]
true
1
6
google/guava
51,352
javadoc
false
fatalError
public Optional<Throwable> fatalError() { return fatalError; }
Return the fatal error encountered during coordinator lookup, if any. @return an {@link Optional} containing the fatal error, or empty if no fatal error has occurred.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CoordinatorRequestManager.java
262
[]
true
1
6.32
apache/kafka
31,560
javadoc
false
coverage_error
def coverage_error(y_true, y_score, *, sample_weight=None): """Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in ``y_true`` per sample. Ties in ``y_scores`` are broken by giving maximal rank that would have been assigned to all tied values. Note: Our implementation's score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels. Read more in the :ref:`User Guide <coverage_error>`. Parameters ---------- y_true : array-like of shape (n_samples, n_labels) True binary labels in binary indicator format. y_score : array-like of shape (n_samples, n_labels) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by "decision_function" on some classifiers). For :term:`decision_function` scores, values greater than or equal to zero should indicate the positive class. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- coverage_error : float The coverage error. References ---------- .. [1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US. 
Examples -------- >>> from sklearn.metrics import coverage_error >>> y_true = [[1, 0, 0], [0, 1, 1]] >>> y_score = [[1, 0, 0], [0, 1, 1]] >>> coverage_error(y_true, y_score) 1.5 """ y_true = check_array(y_true, ensure_2d=True) y_score = check_array(y_score, ensure_2d=True) check_consistent_length(y_true, y_score, sample_weight) y_type = type_of_target(y_true, input_name="y_true") if y_type != "multilabel-indicator": raise ValueError("{0} format is not supported".format(y_type)) if y_true.shape != y_score.shape: raise ValueError("y_true and y_score have different shape") y_score_mask = np.ma.masked_array(y_score, mask=np.logical_not(y_true)) y_min_relevant = y_score_mask.min(axis=1).reshape((-1, 1)) coverage = (y_score >= y_min_relevant).sum(axis=1) coverage = coverage.filled(0) return float(np.average(coverage, weights=sample_weight))
Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in ``y_true`` per sample. Ties in ``y_scores`` are broken by giving maximal rank that would have been assigned to all tied values. Note: Our implementation's score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels. Read more in the :ref:`User Guide <coverage_error>`. Parameters ---------- y_true : array-like of shape (n_samples, n_labels) True binary labels in binary indicator format. y_score : array-like of shape (n_samples, n_labels) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by "decision_function" on some classifiers). For :term:`decision_function` scores, values greater than or equal to zero should indicate the positive class. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- coverage_error : float The coverage error. References ---------- .. [1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US. Examples -------- >>> from sklearn.metrics import coverage_error >>> y_true = [[1, 0, 0], [0, 1, 1]] >>> y_score = [[1, 0, 0], [0, 1, 1]] >>> coverage_error(y_true, y_score) 1.5
python
sklearn/metrics/_ranking.py
1,421
[ "y_true", "y_score", "sample_weight" ]
false
3
7.44
scikit-learn/scikit-learn
64,340
numpy
false
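The `coverage_error` record above ranks scores and counts how deep the ranking must go to cover every true label. A minimal pure-Python sketch of that logic (the real implementation is NumPy-based and supports sample weights; names here are illustrative, not sklearn's):

```python
# Pure-Python sketch of coverage error: for each sample, count the labels
# scoring at least as high as the weakest true label, then average.

def coverage_error_sketch(y_true, y_score):
    coverages = []
    for truth, scores in zip(y_true, y_score):
        relevant = [s for t, s in zip(truth, scores) if t]
        if not relevant:            # degenerate case: no true labels
            coverages.append(0)
            continue
        min_relevant = min(relevant)
        # labels ranked at or above the weakest true label
        coverages.append(sum(1 for s in scores if s >= min_relevant))
    return sum(coverages) / len(coverages)

print(coverage_error_sketch([[1, 0, 0], [0, 1, 1]],
                            [[1, 0, 0], [0, 1, 1]]))  # 1.5
```

This reproduces the docstring's example (1.5), including the ">=" tie-breaking that assigns tied values their maximal rank.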
getBeanNamesForType
@Override public String[] getBeanNamesForType(@Nullable ResolvableType type, boolean includeNonSingletons, boolean allowEagerInit) { Class<?> resolved = (type != null ? type.resolve() : null); boolean isFactoryType = (resolved != null && FactoryBean.class.isAssignableFrom(resolved)); List<String> matches = new ArrayList<>(); for (Map.Entry<String, Object> entry : this.beans.entrySet()) { String beanName = entry.getKey(); Object beanInstance = entry.getValue(); if (beanInstance instanceof FactoryBean<?> factoryBean && !isFactoryType) { if ((includeNonSingletons || factoryBean.isSingleton()) && (type == null || isTypeMatch(factoryBean, type.toClass()))) { matches.add(beanName); } } else { if (type == null || type.isInstance(beanInstance)) { matches.add(beanName); } } } return StringUtils.toStringArray(matches); }
Return the names of beans matching the given type (including subclasses), judging from either the bean instances or, in the case of FactoryBeans, the type of object they create. @param type the generically typed class or interface to match @param includeNonSingletons whether to include non-singleton beans too or just singletons @param allowEagerInit not honored by this static implementation, which holds pre-instantiated beans @return the names of beans (or objects created by FactoryBeans) matching the given object type, or an empty array if none
java
spring-beans/src/main/java/org/springframework/beans/factory/support/StaticListableBeanFactory.java
371
[ "type", "includeNonSingletons", "allowEagerInit" ]
true
11
6.88
spring-projects/spring-framework
59,386
javadoc
false
readKeyStore
private KeyStore readKeyStore(Path path) { try { return KeyStoreUtil.readKeyStore(path, type, password); } catch (SecurityException e) { throw SslFileUtil.accessControlFailure(fileTypeForException(), List.of(path), e, configBasePath); } catch (IOException e) { throw SslFileUtil.ioException(fileTypeForException(), List.of(path), e, getAdditionalErrorDetails()); } catch (GeneralSecurityException e) { throw keystoreException(path, e); } }
Read and load the keystore at the given path, translating any access-control, I/O, or general security failure into a descriptive SSL configuration exception that includes the file path and configuration base path. @param path the path to the keystore file @return the loaded {@link KeyStore}
java
libs/ssl-config/src/main/java/org/elasticsearch/common/ssl/StoreTrustConfig.java
92
[ "path" ]
KeyStore
true
4
6.4
elastic/elasticsearch
75,680
javadoc
false
clamp
function clamp(number, lower, upper) { if (upper === undefined) { upper = lower; lower = undefined; } if (upper !== undefined) { upper = toNumber(upper); upper = upper === upper ? upper : 0; } if (lower !== undefined) { lower = toNumber(lower); lower = lower === lower ? lower : 0; } return baseClamp(toNumber(number), lower, upper); }
Clamps `number` within the inclusive `lower` and `upper` bounds. @static @memberOf _ @since 4.0.0 @category Number @param {number} number The number to clamp. @param {number} [lower] The lower bound. @param {number} upper The upper bound. @returns {number} Returns the clamped number. @example _.clamp(-10, -5, 5); // => -5 _.clamp(10, -5, 5); // => 5
javascript
lodash.js
14,088
[ "number", "lower", "upper" ]
false
6
7.68
lodash/lodash
61,490
jsdoc
false
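The lodash `clamp` record above has two quirks worth showing: a single bound acts as the upper bound, and a `NaN` bound collapses to 0 (via the `x === x` self-equality check). A hedged Python sketch of the same semantics:

```python
# Sketch of lodash's clamp: clamp(n, upper) and clamp(n, lower, upper) forms,
# with NaN bounds treated as 0 (NaN != NaN, so x == x is False for NaN).

def clamp(number, lower, upper=None):
    if upper is None:                 # clamp(n, upper) form
        lower, upper = None, lower
    if upper is not None:
        upper = upper if upper == upper else 0.0   # NaN -> 0
        number = min(number, upper)
    if lower is not None:
        lower = lower if lower == lower else 0.0   # NaN -> 0
        number = max(number, lower)
    return number

print(clamp(-10, -5, 5))  # -5
print(clamp(10, -5, 5))   # 5
print(clamp(7, 5))        # 5 (single bound is the upper bound)
```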
tobytes
def tobytes(self, fill_value=None, order='C'): """ Return the array data as a string containing the raw bytes in the array. The array is filled with a fill value before the string conversion. Parameters ---------- fill_value : scalar, optional Value used to fill in the masked values. Default is None, in which case `MaskedArray.fill_value` is used. order : {'C','F','A'}, optional Order of the data item in the copy. Default is 'C'. - 'C' -- C order (row major). - 'F' -- Fortran order (column major). - 'A' -- Any, current order of array. - None -- Same as 'A'. See Also -------- numpy.ndarray.tobytes tolist, tofile Notes ----- As for `ndarray.tobytes`, information about the shape, dtype, etc., but also about `fill_value`, will be lost. Examples -------- >>> import numpy as np >>> x = np.ma.array(np.array([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) >>> x.tobytes() b'\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00?B\\x0f\\x00\\x00\\x00\\x00\\x00?B\\x0f\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00' """ return self.filled(fill_value).tobytes(order=order)
Return the array data as a string containing the raw bytes in the array. The array is filled with a fill value before the string conversion. Parameters ---------- fill_value : scalar, optional Value used to fill in the masked values. Default is None, in which case `MaskedArray.fill_value` is used. order : {'C','F','A'}, optional Order of the data item in the copy. Default is 'C'. - 'C' -- C order (row major). - 'F' -- Fortran order (column major). - 'A' -- Any, current order of array. - None -- Same as 'A'. See Also -------- numpy.ndarray.tobytes tolist, tofile Notes ----- As for `ndarray.tobytes`, information about the shape, dtype, etc., but also about `fill_value`, will be lost. Examples -------- >>> import numpy as np >>> x = np.ma.array(np.array([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) >>> x.tobytes() b'\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00?B\\x0f\\x00\\x00\\x00\\x00\\x00?B\\x0f\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00'
python
numpy/ma/core.py
6,351
[ "self", "fill_value", "order" ]
false
1
6.16
numpy/numpy
31,054
numpy
false
afterPropertiesSet
@Override public void afterPropertiesSet() throws Exception { if (this.dataSource == null && this.nonTransactionalDataSource != null) { this.dataSource = this.nonTransactionalDataSource; } if (this.applicationContext != null && this.resourceLoader == null) { this.resourceLoader = this.applicationContext; } // Initialize the Scheduler instance... this.scheduler = prepareScheduler(prepareSchedulerFactory()); try { registerListeners(); registerJobsAndTriggers(); } catch (Exception ex) { try { this.scheduler.shutdown(true); } catch (Exception ex2) { logger.debug("Scheduler shutdown exception after registration failure", ex2); } throw ex; } }
Initialize and start the Scheduler: falls back to the non-transactional DataSource and to the ApplicationContext as ResourceLoader if necessary, prepares the Scheduler instance, and registers listeners, jobs and triggers. Shuts the Scheduler down again if registration fails.
java
spring-context-support/src/main/java/org/springframework/scheduling/quartz/SchedulerFactoryBean.java
480
[]
void
true
7
7.2
spring-projects/spring-framework
59,386
javadoc
false
_shared_cache_filepath
def _shared_cache_filepath(self) -> Path: """Get the shared cache filepath for memoizer cache dumps. Returns: The path to the shared memoizer cache JSON file. """ return Path(cache_dir()) / "memoizer_cache.json"
Get the shared cache filepath for memoizer cache dumps. Returns: The path to the shared memoizer cache JSON file.
python
torch/_inductor/runtime/caching/interfaces.py
271
[ "self" ]
Path
true
1
6.56
pytorch/pytorch
96,034
unknown
false
toString
@Override public final String toString() { Runnable state = get(); String result; if (state == DONE) { result = "running=[DONE]"; } else if (state instanceof Blocker) { result = "running=[INTERRUPTED]"; } else if (state instanceof Thread) { // getName is final on Thread, no need to worry about exceptions result = "running=[RUNNING ON " + ((Thread) state).getName() + "]"; } else { result = "running=[NOT STARTED YET]"; } return result + ", " + toPendingString(); }
Returns a description of this task's run state ([DONE], [INTERRUPTED], [RUNNING ON &lt;thread name&gt;], or [NOT STARTED YET]) followed by the pending description.
java
android/guava/src/com/google/common/util/concurrent/InterruptibleTask.java
251
[]
String
true
4
6.56
google/guava
51,352
javadoc
false
getFraction
public static Fraction getFraction(String str) { Objects.requireNonNull(str, "str"); // parse double format int pos = str.indexOf('.'); if (pos >= 0) { return getFraction(Double.parseDouble(str)); } // parse X Y/Z format pos = str.indexOf(' '); if (pos > 0) { final int whole = Integer.parseInt(str.substring(0, pos)); str = str.substring(pos + 1); pos = str.indexOf('/'); if (pos < 0) { throw new NumberFormatException("The fraction could not be parsed as the format X Y/Z"); } final int numer = Integer.parseInt(str.substring(0, pos)); final int denom = Integer.parseInt(str.substring(pos + 1)); return getFraction(whole, numer, denom); } // parse Y/Z format pos = str.indexOf('/'); if (pos < 0) { // simple whole number return getFraction(Integer.parseInt(str), 1); } final int numer = Integer.parseInt(str.substring(0, pos)); final int denom = Integer.parseInt(str.substring(pos + 1)); return getFraction(numer, denom); }
Creates a Fraction from a {@link String}. <p> The formats accepted are: </p> <ol> <li>{@code double} String containing a dot</li> <li>'X Y/Z'</li> <li>'Y/Z'</li> <li>'X' (a simple whole number)</li> </ol> @param str the string to parse, must not be {@code null} @return the new {@link Fraction} instance @throws NullPointerException if the string is {@code null} @throws NumberFormatException if the number format is invalid
java
src/main/java/org/apache/commons/lang3/math/Fraction.java
264
[ "str" ]
Fraction
true
5
8.08
apache/commons-lang
2,896
javadoc
false
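The `getFraction` record above dispatches on three string formats: a decimal, a mixed number 'X Y/Z', and a plain 'Y/Z'. A sketch of the same branching using Python's standard `fractions` module (a loose analogue, not a port; Java's overflow and sign handling are omitted):

```python
# Sketch of the three accepted formats, mirroring the 'double', 'X Y/Z',
# and 'Y/Z' branches of the Java method above.
from fractions import Fraction

def get_fraction(s):
    if "." in s:                      # decimal form
        return Fraction(float(s)).limit_denominator()
    if " " in s:                      # 'X Y/Z' mixed-number form
        whole, rest = s.split(" ", 1)
        numer, denom = rest.split("/")
        w, n, d = int(whole), int(numer), int(denom)
        sign = -1 if w < 0 else 1
        return Fraction(sign * (abs(w) * d + n), d)
    if "/" in s:                      # 'Y/Z'
        n, d = s.split("/")
        return Fraction(int(n), int(d))
    return Fraction(int(s))           # plain whole number

print(get_fraction("1 3/4"))  # 7/4
print(get_fraction("2/8"))    # 1/4
```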
failableStream
public static <T> FailableStream<T> failableStream(final Collection<T> stream) { return failableStream(of(stream)); }
Converts the given {@link Collection} into a {@link FailableStream}. This is basically a simplified, reduced version of the {@link Stream} class, with the same underlying element stream, except that failable objects, like {@link FailablePredicate}, {@link FailableFunction}, or {@link FailableConsumer} may be applied, instead of {@link Predicate}, {@link Function}, or {@link Consumer}. The idea is to rewrite a code snippet like this: <pre> {@code final List<O> list; final Method m; final Function<O, String> mapper = (o) -> { try { return (String) m.invoke(o); } catch (Throwable t) { throw Failable.rethrow(t); } }; final List<String> strList = list.stream().map(mapper).collect(Collectors.toList()); } </pre> as follows: <pre> {@code final List<O> list; final Method m; final List<String> strList = Failable.stream(list.stream()).map((o) -> (String) m.invoke(o)).collect(Collectors.toList()); } </pre> While the second version may not be <em>quite</em> as efficient (because it depends on the creation of additional, intermediate objects, of type FailableStream), it is much more concise, and readable, and meets the spirit of Lambdas better than the first version. @param <T> The streams element type. @param stream The stream, which is being converted. @return The {@link FailableStream}, which has been created by converting the stream. @since 3.13.0
java
src/main/java/org/apache/commons/lang3/stream/Streams.java
521
[ "stream" ]
true
1
6.32
apache/commons-lang
2,896
javadoc
false
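The `failableStream` record above exists because Java lambdas cannot throw checked exceptions, so the library rethrows them unwrapped at the call site. Python has no checked exceptions, so this is only a loose analogue of the idea: map a possibly-throwing function and surface the failure with context, keeping call sites terse.

```python
# Loose Python analogue of a failable map: apply fn to each item and
# rethrow any failure with context, chained to the original cause.

def failable_map(fn, items):
    out = []
    for item in items:
        try:
            out.append(fn(item))
        except Exception as e:
            raise RuntimeError(f"mapping failed on {item!r}") from e
    return out

print(failable_map(str.upper, ["a", "b"]))  # ['A', 'B']
```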
resolveItemDeprecation
protected final ItemDeprecation resolveItemDeprecation(MetadataGenerationEnvironment environment, Element... elements) { boolean deprecated = Arrays.stream(elements).anyMatch(environment::isDeprecated); return deprecated ? environment.resolveItemDeprecation(getGetter()) : null; }
Resolve the item deprecation for this property if any of the given elements is deprecated. @param environment the metadata generation environment @param elements the elements to check for deprecation @return the item deprecation, or {@code null} if the property is not deprecated
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/PropertyDescriptor.java
199
[ "environment" ]
ItemDeprecation
true
2
7.92
spring-projects/spring-boot
79,428
javadoc
false
authenticationException
public AuthenticationException authenticationException(String id) { NodeConnectionState state = nodeState.get(id); return state != null ? state.authenticationException : null; }
Return authentication exception if an authentication error occurred @param id The id of the node to check
java
clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
331
[ "id" ]
AuthenticationException
true
2
6.32
apache/kafka
31,560
javadoc
false
getCause
@Deprecated public static Throwable getCause(final Throwable throwable, String[] methodNames) { if (throwable == null) { return null; } if (methodNames == null) { final Throwable cause = throwable.getCause(); if (cause != null) { return cause; } methodNames = CAUSE_METHOD_NAMES; } return Stream.of(methodNames).map(m -> getCauseUsingMethodName(throwable, m)).filter(Objects::nonNull).findFirst().orElse(null); }
Introspects the {@link Throwable} to obtain the cause. <p> A {@code null} set of method names means use the default set. A {@code null} in the set of method names will be ignored. </p> @param throwable the throwable to introspect for a cause, may be null. @param methodNames the method names, null treated as default set. @return the cause of the {@link Throwable}, {@code null} if none found or null throwable input. @since 1.0 @deprecated This feature will be removed in Lang 4, use {@link Throwable#getCause} instead.
java
src/main/java/org/apache/commons/lang3/exception/ExceptionUtils.java
230
[ "throwable", "methodNames" ]
Throwable
true
4
7.92
apache/commons-lang
2,896
javadoc
false
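The deprecated `getCause` record above probes legacy getter names because pre-1.4 Java had no standard cause chaining. Python exceptions carry their cause directly, so the equivalent lookup is a one-liner; a sketch:

```python
# Python exceptions chain causes natively: __cause__ for explicit
# 'raise ... from ...', __context__ for implicit chaining.

def get_cause(exc):
    """Return the direct cause of an exception, or None."""
    return exc.__cause__ or exc.__context__

try:
    try:
        raise ValueError("root problem")
    except ValueError as inner:
        raise RuntimeError("wrapper") from inner
except RuntimeError as outer:
    cause = get_cause(outer)

print(type(cause).__name__, cause)  # ValueError root problem
```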
create
public static NodeApiVersions create(short apiKey, short minVersion, short maxVersion) { return create(Collections.singleton(new ApiVersion() .setApiKey(apiKey) .setMinVersion(minVersion) .setMaxVersion(maxVersion))); }
Create a NodeApiVersions object with a single ApiKey. It is mainly used in tests. @param apiKey ApiKey's id. @param minVersion ApiKey's minimum version. @param maxVersion ApiKey's maximum version. @return A new NodeApiVersions object.
java
clients/src/main/java/org/apache/kafka/clients/NodeApiVersions.java
96
[ "apiKey", "minVersion", "maxVersion" ]
NodeApiVersions
true
1
6.56
apache/kafka
31,560
javadoc
false
bracket_category_matcher
def bracket_category_matcher(title: str): """Categorize a commit based on the presence of a bracketed category in the title. Args: title (str): title to search Returns: Optional[str] """ pairs = [ ("[dynamo]", "dynamo"), ("[torchdynamo]", "dynamo"), ("[torchinductor]", "inductor"), ("[inductor]", "inductor"), ("[codemod", "skip"), ("[profiler]", "profiler"), ("[functorch]", "functorch"), ("[autograd]", "autograd_frontend"), ("[quantization]", "quantization"), ("[nn]", "nn_frontend"), ("[complex]", "complex_frontend"), ("[mps]", "mps"), ("[optimizer]", "optimizer_frontend"), ("[xla]", "xla"), ] title_lower = title.lower() for bracket, category in pairs: if bracket in title_lower: return category return None
Categorize a commit based on the presence of a bracketed category in the title. Args: title (str): title to search Returns: Optional[str]
python
scripts/release_notes/commitlist.py
157
[ "title" ]
true
3
7.76
pytorch/pytorch
96,034
google
false
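A point worth noting about the matcher above: it is first-match-wins over the ordered pairs list, so a title containing several brackets maps to whichever bracket appears earlier in the list, not earlier in the title. A reduced sketch demonstrating that behavior (two illustrative pairs, hypothetical titles):

```python
# Reduced first-match-wins sketch of the bracket matcher above.
pairs = [
    ("[dynamo]", "dynamo"),
    ("[inductor]", "inductor"),
]

def categorize(title, pairs=pairs):
    t = title.lower()
    for bracket, category in pairs:
        if bracket in t:
            return category
    return None

print(categorize("[Dynamo] fix guard"))            # dynamo (case-insensitive)
print(categorize("[inductor] [dynamo] combined"))  # dynamo (list order wins)
print(categorize("plain title"))                   # None
```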
_fill
def _fill(self, direction: Literal["ffill", "bfill"], limit: int | None = None): """ Shared function for `pad` and `backfill` to call Cython method. Parameters ---------- direction : {'ffill', 'bfill'} Direction passed to underlying Cython function. `bfill` will cause values to be filled backwards. `ffill` and any other values will default to a forward fill limit : int, default None Maximum number of consecutive values to fill. If `None`, this method will convert to -1 prior to passing to Cython Returns ------- `Series` or `DataFrame` with filled values See Also -------- pad : Returns Series with minimum number of char in object. backfill : Backward fill the missing values in the dataset. """ # Need int value for Cython if limit is None: limit = -1 ids = self._grouper.ids ngroups = self._grouper.ngroups col_func = partial( libgroupby.group_fillna_indexer, labels=ids, limit=limit, compute_ffill=(direction == "ffill"), ngroups=ngroups, ) def blk_func(values: ArrayLike) -> ArrayLike: mask = isna(values) if values.ndim == 1: indexer = np.empty(values.shape, dtype=np.intp) col_func(out=indexer, mask=mask) # type: ignore[arg-type] return algorithms.take_nd(values, indexer) else: # We broadcast algorithms.take_nd analogous to # np.take_along_axis if isinstance(values, np.ndarray): dtype = values.dtype if self._grouper.has_dropped_na: # dropped null groups give rise to nan in the result dtype = ensure_dtype_can_hold_na(values.dtype) out = np.empty(values.shape, dtype=dtype) else: # Note: we only get here with backfill/pad, # so if we have a dtype that cannot hold NAs, # then there will be no -1s in indexer, so we can use # the original dtype (no need to ensure_dtype_can_hold_na) out = type(values)._empty(values.shape, dtype=values.dtype) for i, value_element in enumerate(values): # call group_fillna_indexer column-wise indexer = np.empty(values.shape[1], dtype=np.intp) col_func(out=indexer, mask=mask[i]) out[i, :] = algorithms.take_nd(value_element, indexer) return out mgr 
= self._get_data_to_aggregate() res_mgr = mgr.apply(blk_func) new_obj = self._wrap_agged_manager(res_mgr) new_obj.index = self.obj.index return new_obj
Shared function for `pad` and `backfill` to call Cython method. Parameters ---------- direction : {'ffill', 'bfill'} Direction passed to underlying Cython function. `bfill` will cause values to be filled backwards. `ffill` and any other values will default to a forward fill limit : int, default None Maximum number of consecutive values to fill. If `None`, this method will convert to -1 prior to passing to Cython Returns ------- `Series` or `DataFrame` with filled values See Also -------- pad : Returns Series with minimum number of char in object. backfill : Backward fill the missing values in the dataset.
python
pandas/core/groupby/groupby.py
4,013
[ "self", "direction", "limit" ]
true
8
6.96
pandas-dev/pandas
47,362
numpy
false
min
public static short min(short a, final short b, final short c) { if (b < a) { a = b; } if (c < a) { a = c; } return a; }
Gets the minimum of three {@code short} values. @param a value 1. @param b value 2. @param c value 3. @return the smallest of the values.
java
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
1,331
[ "a", "b", "c" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
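The `min` record above finds the smallest of three values with exactly two comparisons by reusing `a` as the running minimum. The same strategy in Python (the built-in `min(a, b, c)` would of course do the job directly):

```python
# Two-comparison minimum of three, mirroring the Java helper above.

def min3(a, b, c):
    if b < a:
        a = b
    if c < a:
        a = c
    return a

print(min3(5, 2, 9))  # 2
```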
quoteMatcher
public static StrMatcher quoteMatcher() { return QUOTE_MATCHER; }
Gets the matcher for the single or double quote character. @return the matcher for a single or double quote.
java
src/main/java/org/apache/commons/lang3/text/StrMatcher.java
313
[]
StrMatcher
true
1
6.96
apache/commons-lang
2,896
javadoc
false
sendListOffsetsRequestsAndResetPositions
private CompletableFuture<Void> sendListOffsetsRequestsAndResetPositions( final Map<TopicPartition, AutoOffsetResetStrategy> partitionAutoOffsetResetStrategyMap) { Map<TopicPartition, Long> timestampsToSearch = partitionAutoOffsetResetStrategyMap.entrySet().stream() .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().timestamp().get())); Map<Node, Map<TopicPartition, ListOffsetsRequestData.ListOffsetsPartition>> timestampsToSearchByNode = groupListOffsetRequests(timestampsToSearch, Optional.empty()); final AtomicInteger expectedResponses = new AtomicInteger(0); final CompletableFuture<Void> globalResult = new CompletableFuture<>(); final List<NetworkClientDelegate.UnsentRequest> unsentRequests = new ArrayList<>(); timestampsToSearchByNode.forEach((node, resetTimestamps) -> { subscriptionState.setNextAllowedRetry(resetTimestamps.keySet(), time.milliseconds() + requestTimeoutMs); CompletableFuture<ListOffsetResult> partialResult = buildListOffsetRequestToNode( node, resetTimestamps, false, unsentRequests); partialResult.whenComplete((result, error) -> { if (error == null) { offsetFetcherUtils.onSuccessfulResponseForResettingPositions(result, partitionAutoOffsetResetStrategyMap); } else { RuntimeException e; if (error instanceof RuntimeException) { e = (RuntimeException) error; } else { e = new RuntimeException("Unexpected failure in ListOffsets request for " + "resetting positions", error); } offsetFetcherUtils.onFailedResponseForResettingPositions(resetTimestamps, e); } if (expectedResponses.decrementAndGet() == 0) { globalResult.complete(null); } }); }); if (unsentRequests.isEmpty()) { globalResult.complete(null); } else { expectedResponses.set(unsentRequests.size()); requestsToSend.addAll(unsentRequests); } return globalResult; }
Make asynchronous ListOffsets request to fetch offsets by target times for the specified partitions. Use the retrieved offsets to reset positions in the subscription state. This also adds the request to the list of unsentRequests. @param partitionAutoOffsetResetStrategyMap the mapping between partitions and AutoOffsetResetStrategy @return A {@link CompletableFuture} which completes when the requests are complete.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/OffsetsRequestManager.java
661
[ "partitionAutoOffsetResetStrategyMap" ]
true
5
7.44
apache/kafka
31,560
javadoc
false
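The request manager above completes its global future only after every per-node response arrives, using an `AtomicInteger` countdown. The same countdown pattern can be sketched with Python's `concurrent.futures.Future` (callbacks fire synchronously here for clarity; a real client would complete the parts from I/O threads):

```python
# Countdown-to-completion: the combined future resolves once every
# constituent future has completed, mirroring expectedResponses above.
from concurrent.futures import Future

def all_complete(futures):
    global_result = Future()
    remaining = [len(futures)]          # mutable cell for the closure

    def on_done(_):
        remaining[0] -= 1
        if remaining[0] == 0:
            global_result.set_result(None)

    if not futures:                     # nothing to wait for
        global_result.set_result(None)
    for f in futures:
        f.add_done_callback(on_done)
    return global_result

parts = [Future(), Future()]
combined = all_complete(parts)
for p in parts:
    p.set_result("ok")
print(combined.done())  # True
```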
destructuringNeedsFlattening
function destructuringNeedsFlattening(node: Expression): boolean { if (isObjectLiteralExpression(node)) { for (const elem of node.properties) { switch (elem.kind) { case SyntaxKind.PropertyAssignment: if (destructuringNeedsFlattening(elem.initializer)) { return true; } break; case SyntaxKind.ShorthandPropertyAssignment: if (destructuringNeedsFlattening(elem.name)) { return true; } break; case SyntaxKind.SpreadAssignment: if (destructuringNeedsFlattening(elem.expression)) { return true; } break; case SyntaxKind.MethodDeclaration: case SyntaxKind.GetAccessor: case SyntaxKind.SetAccessor: return false; default: Debug.assertNever(elem, "Unhandled object member kind"); } } } else if (isArrayLiteralExpression(node)) { for (const elem of node.elements) { if (isSpreadElement(elem)) { if (destructuringNeedsFlattening(elem.expression)) { return true; } } else if (destructuringNeedsFlattening(elem)) { return true; } } } else if (isIdentifier(node)) { return length(getExports(node)) > (isExportName(node) ? 1 : 0); } return false; }
Determines whether a destructuring assignment target must be flattened, i.e. whether any identifier it binds (including within nested object, array, and spread positions) corresponds to one or more exported names of the module. @param node The destructuring target expression to check.
typescript
src/compiler/transformers/module/module.ts
844
[ "node" ]
true
14
6.72
microsoft/TypeScript
107,154
jsdoc
false
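The TypeScript walk above recurses through object, array, and spread positions and bottoms out at identifiers, which "need flattening" when they are exported. A rough Python analogue of that shape, using dicts/lists/strings for the three node kinds and a hypothetical `exports` set standing in for `getExports()`:

```python
# Rough analogue of destructuringNeedsFlattening: recurse through nested
# containers; an identifier triggers flattening when it is exported.

def needs_flattening(node, exports):
    if isinstance(node, dict):          # object literal
        return any(needs_flattening(v, exports) for v in node.values())
    if isinstance(node, list):          # array literal
        return any(needs_flattening(v, exports) for v in node)
    if isinstance(node, str):           # identifier
        return node in exports
    return False

exports = {"x"}
print(needs_flattening({"a": ["y", {"b": "x"}]}, exports))  # True
print(needs_flattening(["y", "z"], exports))                # False
```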
transpose
def transpose(a, axes=None): """ Returns an array with axes transposed. For a 1-D array, this returns an unchanged view of the original array, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added, e.g., ``np.atleast_2d(a).T`` achieves this, as does ``a[:, np.newaxis]``. For a 2-D array, this is the standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided, then ``transpose(a).shape == a.shape[::-1]``. Parameters ---------- a : array_like Input array. axes : tuple or list of ints, optional If specified, it must be a tuple or list which contains a permutation of [0, 1, ..., N-1] where N is the number of axes of `a`. Negative indices can also be used to specify axes. The i-th axis of the returned array will correspond to the axis numbered ``axes[i]`` of the input. If not specified, defaults to ``range(a.ndim)[::-1]``, which reverses the order of the axes. Returns ------- p : ndarray `a` with its axes permuted. A view is returned whenever possible. See Also -------- ndarray.transpose : Equivalent method. moveaxis : Move axes of an array to new positions. argsort : Return the indices that would sort an array. Notes ----- Use ``transpose(a, argsort(axes))`` to invert the transposition of tensors when using the `axes` keyword argument. Examples -------- >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> np.transpose(a) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> np.transpose(a) array([1, 2, 3, 4]) >>> a = np.ones((1, 2, 3)) >>> np.transpose(a, (1, 0, 2)).shape (2, 1, 3) >>> a = np.ones((2, 3, 4, 5)) >>> np.transpose(a).shape (5, 4, 3, 2) >>> a = np.arange(3*4*5).reshape((3, 4, 5)) >>> np.transpose(a, (-1, 0, -2)).shape (5, 3, 4) """ return _wrapfunc(a, 'transpose', axes)
Returns an array with axes transposed. For a 1-D array, this returns an unchanged view of the original array, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added, e.g., ``np.atleast_2d(a).T`` achieves this, as does ``a[:, np.newaxis]``. For a 2-D array, this is the standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided, then ``transpose(a).shape == a.shape[::-1]``. Parameters ---------- a : array_like Input array. axes : tuple or list of ints, optional If specified, it must be a tuple or list which contains a permutation of [0, 1, ..., N-1] where N is the number of axes of `a`. Negative indices can also be used to specify axes. The i-th axis of the returned array will correspond to the axis numbered ``axes[i]`` of the input. If not specified, defaults to ``range(a.ndim)[::-1]``, which reverses the order of the axes. Returns ------- p : ndarray `a` with its axes permuted. A view is returned whenever possible. See Also -------- ndarray.transpose : Equivalent method. moveaxis : Move axes of an array to new positions. argsort : Return the indices that would sort an array. Notes ----- Use ``transpose(a, argsort(axes))`` to invert the transposition of tensors when using the `axes` keyword argument. Examples -------- >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> np.transpose(a) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> np.transpose(a) array([1, 2, 3, 4]) >>> a = np.ones((1, 2, 3)) >>> np.transpose(a, (1, 0, 2)).shape (2, 1, 3) >>> a = np.ones((2, 3, 4, 5)) >>> np.transpose(a).shape (5, 4, 3, 2) >>> a = np.arange(3*4*5).reshape((3, 4, 5)) >>> np.transpose(a, (-1, 0, -2)).shape (5, 3, 4)
python
numpy/_core/fromnumeric.py
605
[ "a", "axes" ]
false
1
6.4
numpy/numpy
31,054
numpy
false
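The `transpose` record above turns on two facts from its docstring: the default permutation reverses the axes, and `argsort(axes)` inverts a given permutation. A minimal pure-Python sketch of just the shape bookkeeping (no NumPy required; `permute_shape` and `argsort` are illustrative helpers, not NumPy APIs):

```python
# Sketch of how transpose's `axes` argument permutes an array's shape,
# and how argsort(axes) inverts the permutation (see the Notes section).

def permute_shape(shape, axes=None):
    """Shape of transpose(a, axes) for an array of the given shape."""
    if axes is None:
        axes = range(len(shape))[::-1]          # default: reverse all axes
    # Negative axis indices are allowed, as in numpy.
    axes = [ax % len(shape) for ax in axes]
    return tuple(shape[ax] for ax in axes)

def argsort(seq):
    """Indices that would sort seq (enough to invert a permutation)."""
    return sorted(range(len(seq)), key=seq.__getitem__)

shape = (2, 3, 4, 5)
perm = (1, 0, 3, 2)
moved = permute_shape(shape, perm)              # axes reordered
restored = permute_shape(moved, argsort(perm))  # back to the original shape
```

This reproduces the docstring examples, including the negative-axes case `(-1, 0, -2)` on a `(3, 4, 5)` array giving `(5, 3, 4)`.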
_update_version_in_provider_yaml
def _update_version_in_provider_yaml( provider_id: str, type_of_change: TypeOfChange, min_airflow_version_bump: bool = False ) -> tuple[bool, bool, str]: """ Updates provider version based on the type of change selected by the user :param type_of_change: type of change selected :param provider_id: provider package :param min_airflow_version_bump: if set, ensure that the version bump is at least feature version. :return: tuple of two bools: (with_breaking_change, maybe_with_new_features, original_text) """ provider_details = get_provider_details(provider_id) version = provider_details.versions[0] v = parse(version) with_breaking_changes = False maybe_with_new_features = False if type_of_change == TypeOfChange.BREAKING_CHANGE: v = bump_version(v, VERSION_MAJOR_INDEX) with_breaking_changes = True # we do not know, but breaking changes may also contain new features maybe_with_new_features = True elif type_of_change == TypeOfChange.FEATURE: v = bump_version(v, VERSION_MINOR_INDEX) maybe_with_new_features = True elif type_of_change == TypeOfChange.BUGFIX: v = bump_version(v, VERSION_PATCHLEVEL_INDEX) elif type_of_change == TypeOfChange.MISC: v = bump_version(v, VERSION_PATCHLEVEL_INDEX) if min_airflow_version_bump: v = bump_version(v, VERSION_MINOR_INDEX) provider_yaml_path = get_provider_yaml(provider_id) original_provider_yaml_content = provider_yaml_path.read_text() updated_provider_yaml_content = re.sub( r"^versions:", f"versions:\n - {v}", original_provider_yaml_content, 1, re.MULTILINE ) provider_yaml_path.write_text(updated_provider_yaml_content) get_console().print(f"[special]Bumped version to {v}\n") return with_breaking_changes, maybe_with_new_features, original_provider_yaml_content
Updates provider version based on the type of change selected by the user :param type_of_change: type of change selected :param provider_id: provider package :param min_airflow_version_bump: if set, ensure that the version bump is at least feature version. :return: tuple of two bools and the original text: (with_breaking_change, maybe_with_new_features, original_text)
python
dev/breeze/src/airflow_breeze/prepare_providers/provider_documentation.py
572
[ "provider_id", "type_of_change", "min_airflow_version_bump" ]
tuple[bool, bool, str]
true
6
7.92
apache/airflow
43,597
sphinx
false
extractCauseUnchecked
public static ConcurrentRuntimeException extractCauseUnchecked(final ExecutionException ex) { if (ex == null || ex.getCause() == null) { return null; } ExceptionUtils.throwUnchecked(ex.getCause()); return new ConcurrentRuntimeException(ex.getMessage(), ex.getCause()); }
Inspects the cause of the specified {@link ExecutionException} and creates a {@link ConcurrentRuntimeException} with the checked cause if necessary. This method works exactly like {@link #extractCause(ExecutionException)}. The only difference is that the cause of the specified {@link ExecutionException} is extracted as a runtime exception. This is an alternative for client code that does not want to deal with checked exceptions. @param ex the exception to be processed @return a {@link ConcurrentRuntimeException} with the checked cause
java
src/main/java/org/apache/commons/lang3/concurrent/ConcurrentUtils.java
226
[ "ex" ]
ConcurrentRuntimeException
true
3
7.44
apache/commons-lang
2,896
javadoc
false
charSetMatcher
public static StrMatcher charSetMatcher(final char... chars) { if (ArrayUtils.isEmpty(chars)) { return NONE_MATCHER; } if (chars.length == 1) { return new CharMatcher(chars[0]); } return new CharSetMatcher(chars); }
Creates a matcher from a set of characters. @param chars the characters to match, null or empty matches nothing. @return a new matcher for the given char[].
java
src/main/java/org/apache/commons/lang3/text/StrMatcher.java
255
[]
StrMatcher
true
3
8.24
apache/commons-lang
2,896
javadoc
false
createEntries
@Override Collection<Entry<K, V>> createEntries() { if (this instanceof SetMultimap) { return new EntrySet(); } else { return new Entries(); } }
{@inheritDoc} <p>The iterator generated by the returned collection traverses the values for one key, followed by the values of a second key, and so on. <p>Each entry is an immutable snapshot of a key-value mapping in the multimap, taken at the time the entry is returned by a method call to the collection or its iterator.
java
android/guava/src/com/google/common/collect/AbstractMapBasedMultimap.java
1,247
[]
true
2
6.4
google/guava
51,352
javadoc
false
datapath
def datapath(strict_data_files: str) -> Callable[..., str]: """ Get the path to a data file. Parameters ---------- path : str Path to the file, relative to ``pandas/tests/`` Returns ------- path including ``pandas/tests``. Raises ------ ValueError If the path doesn't exist and the --no-strict-data-files option is not set. """ BASE_PATH = os.path.join(os.path.dirname(__file__), "tests") def deco(*args): path = os.path.join(BASE_PATH, *args) if not os.path.exists(path): if strict_data_files: raise ValueError( f"Could not find file {path} and --no-strict-data-files is not set." ) pytest.skip(f"Could not find {path}.") return path return deco
Get the path to a data file. Parameters ---------- path : str Path to the file, relative to ``pandas/tests/`` Returns ------- path including ``pandas/tests``. Raises ------ ValueError If the path doesn't exist and the --no-strict-data-files option is not set.
python
pandas/conftest.py
1,144
[ "strict_data_files" ]
Callable[..., str]
true
3
6.88
pandas-dev/pandas
47,362
numpy
false
wrapper
def wrapper(fn: Callable[_P, _R]) -> Callable[_P, _R]: """Wrap the function to enable memoization. Args: fn: The function to wrap. Returns: A wrapped version of the function. """ # If caching is disabled, return the original function unchanged if not config.IS_CACHING_MODULE_ENABLED(): return fn def inner(*args: _P.args, **kwargs: _P.kwargs) -> _R: """Call the original function and cache the result. Args: *args: Positional arguments to pass to the function. **kwargs: Keyword arguments to pass to the function. Returns: The result of calling the original function. """ # Call the function to compute the result result = fn(*args, **kwargs) # Generate cache key from parameters cache_key = self._make_key(custom_params_encoder, *args, **kwargs) # Encode params for human-readable dump if custom_params_encoder is not None: encoded_params = custom_params_encoder(*args, **kwargs) else: encoded_params = { "args": args, "kwargs": kwargs, } # Encode the result if encoder is provided if custom_result_encoder is not None: # Get the encoder function by calling the factory with params encoder_fn = custom_result_encoder(*args, **kwargs) encoded_result = encoder_fn(result) else: encoded_result = result # Store CacheEntry in cache cache_entry = CacheEntry( encoded_params=encoded_params, encoded_result=encoded_result, ) self._cache.insert(cache_key, cache_entry) # Return the original result (not the encoded version) return result return inner
Wrap the function to enable memoization. Args: fn: The function to wrap. Returns: A wrapped version of the function.
python
torch/_inductor/runtime/caching/interfaces.py
454
[ "fn" ]
Callable[_P, _R]
true
6
8.24
pytorch/pytorch
96,034
google
false
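The `wrapper` interface above always calls the wrapped function and then records the encoded params and result (it is a recording cache, not a lookup cache). The classic memoizer variant instead short-circuits on a hit and skips recomputation. A minimal sketch of that variant (illustrative names, not the torch caching API):

```python
import functools

# Minimal lookup memoizer: cache results keyed by the call arguments.
# Unlike the recording interface above, a cache hit skips the call entirely.

def memoize(fn):
    cache = {}

    @functools.wraps(fn)
    def inner(*args, **kwargs):
        # kwargs are sorted so {"a": 1, "b": 2} and {"b": 2, "a": 1} share a key
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = fn(*args, **kwargs)   # compute once, reuse after
        return cache[key]

    return inner
```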
generateBitVectors
public static <E extends Enum<E>> long[] generateBitVectors(final Class<E> enumClass, final Iterable<? extends E> values) { asEnum(enumClass); Objects.requireNonNull(values, "values"); final EnumSet<E> condensed = EnumSet.noneOf(enumClass); values.forEach(constant -> condensed.add(Objects.requireNonNull(constant, NULL_ELEMENTS_NOT_PERMITTED))); final long[] result = new long[(enumClass.getEnumConstants().length - 1) / Long.SIZE + 1]; for (final E value : condensed) { result[value.ordinal() / Long.SIZE] |= 1L << value.ordinal() % Long.SIZE; } ArrayUtils.reverse(result); return result; }
Creates a bit vector representation of the given subset of an Enum using as many {@code long}s as needed. <p>This generates a value that is usable by {@link EnumUtils#processBitVectors}.</p> <p>Use this method if you have more than 64 values in your Enum.</p> @param enumClass the class of the enum we are working with, not {@code null}. @param values the values we want to convert, not {@code null}, neither containing {@code null}. @param <E> the type of the enumeration. @return a long[] whose values provide a binary representation of the given set of enum values with the least significant digits rightmost. @throws NullPointerException if {@code enumClass} or {@code values} is {@code null}. @throws IllegalArgumentException if {@code enumClass} is not an enum class, or if any {@code values} {@code null}. @since 3.2
java
src/main/java/org/apache/commons/lang3/EnumUtils.java
178
[ "enumClass", "values" ]
true
1
6.88
apache/commons-lang
2,896
javadoc
false
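The `generateBitVectors` record packs enum ordinals into as many 64-bit words as needed and then reverses the array so index 0 holds the most significant word. A hypothetical Python analogue of the same bit layout (the function name and shape are illustrative, not a real Commons Lang API):

```python
# Pack a set of 0-based ordinals into 64-bit words, least-significant word
# last, mirroring EnumUtils.generateBitVectors' reversed long[] layout.

def generate_bit_vectors(num_constants, ordinals):
    # Same sizing formula as the Java code: one word per 64 constants.
    words = [0] * ((num_constants - 1) // 64 + 1)
    for o in set(ordinals):                  # duplicates are condensed, as in EnumSet
        words[o // 64] |= 1 << (o % 64)
    words.reverse()                          # index 0 becomes the most significant word
    return words
```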
parseDelimitedList
function parseDelimitedList<T extends Node | undefined>(kind: ParsingContext, parseElement: () => T, considerSemicolonAsDelimiter?: boolean): NodeArray<NonNullable<T>> | undefined { const saveParsingContext = parsingContext; parsingContext |= 1 << kind; const list: NonNullable<T>[] = []; const listPos = getNodePos(); let commaStart = -1; // Meaning the previous token was not a comma while (true) { if (isListElement(kind, /*inErrorRecovery*/ false)) { const startPos = scanner.getTokenFullStart(); const result = parseListElement(kind, parseElement); if (!result) { parsingContext = saveParsingContext; return undefined; } list.push(result); commaStart = scanner.getTokenStart(); if (parseOptional(SyntaxKind.CommaToken)) { // No need to check for a zero length node since we know we parsed a comma continue; } commaStart = -1; // Back to the state where the last token was not a comma if (isListTerminator(kind)) { break; } // We didn't get a comma, and the list wasn't terminated, explicitly parse // out a comma so we give a good error message. parseExpected(SyntaxKind.CommaToken, getExpectedCommaDiagnostic(kind)); // If the token was a semicolon, and the caller allows that, then skip it and // continue. This ensures we get back on track and don't result in tons of // parse errors. For example, this can happen when people do things like use // a semicolon to delimit object literal members. Note: we'll have already // reported an error when we called parseExpected above. if (considerSemicolonAsDelimiter && token() === SyntaxKind.SemicolonToken && !scanner.hasPrecedingLineBreak()) { nextToken(); } if (startPos === scanner.getTokenFullStart()) { // What we're parsing isn't actually remotely recognizable as a element and we've consumed no tokens whatsoever // Consume a token to advance the parser in some way and avoid an infinite loop // This can happen when we're speculatively parsing parenthesized expressions which we think may be arrow functions, // or when a modifier keyword which is disallowed as a parameter name (ie, `static` in strict mode) is supplied nextToken(); } continue; } if (isListTerminator(kind)) { break; } if (abortParsingListOrMoveToNextToken(kind)) { break; } } parsingContext = saveParsingContext; // Recording the trailing comma is deliberately done after the previous // loop, and not just if we see a list terminator. This is because the list // may have ended incorrectly, but it is still important to know if there // was a trailing comma. // Check if the last token was a comma. // Always preserve a trailing comma by marking it on the NodeArray return createNodeArray(list, listPos, /*end*/ undefined, commaStart >= 0); }
Parses a comma-delimited list of elements for the given parsing context, with error recovery, optionally treating a stray semicolon as a delimiter, and recording whether the list ended with a trailing comma. @param kind Parsing context of the list. @param parseElement Callback that parses a single list element. @param considerSemicolonAsDelimiter Whether a semicolon may be skipped as a delimiter during error recovery.
typescript
src/compiler/parser.ts
3,492
[ "kind", "parseElement", "considerSemicolonAsDelimiter?" ]
true
12
6.8
microsoft/TypeScript
107,154
jsdoc
false
findMatchingMethod
@Override protected @Nullable Method findMatchingMethod() { Method matchingMethod = super.findMatchingMethod(); // Second pass: look for method where arguments can be converted to parameter types. if (matchingMethod == null) { // Interpret argument array as individual method arguments. matchingMethod = doFindMatchingMethod(getArguments()); } if (matchingMethod == null) { // Interpret argument array as single method argument of array type. matchingMethod = doFindMatchingMethod(new Object[] {getArguments()}); } return matchingMethod; }
This implementation looks for a method with matching parameter types. @see #doFindMatchingMethod
java
spring-beans/src/main/java/org/springframework/beans/support/ArgumentConvertingMethodInvoker.java
112
[]
Method
true
3
6.24
spring-projects/spring-framework
59,386
javadoc
false
_date_or_empty
def _date_or_empty(*, task_instance: TaskInstance, attr: str) -> str: """ Fetch a date attribute or None of it does not exist. :param task_instance: the task instance :param attr: the attribute name :meta private: """ result: datetime | None = getattr(task_instance, attr, None) return result.strftime("%Y%m%dT%H%M%S") if result else ""
Fetch a date attribute or None if it does not exist. :param task_instance: the task instance :param attr: the attribute name :meta private:
python
airflow-core/src/airflow/models/taskinstance.py
347
[ "task_instance", "attr" ]
str
true
2
7.04
apache/airflow
43,597
sphinx
false
isPrimary
protected boolean isPrimary(String beanName, Object beanInstance) { String transformedBeanName = transformedBeanName(beanName); if (containsBeanDefinition(transformedBeanName)) { return getMergedLocalBeanDefinition(transformedBeanName).isPrimary(); } return (getParentBeanFactory() instanceof DefaultListableBeanFactory parent && parent.isPrimary(transformedBeanName, beanInstance)); }
Return whether the bean definition for the given bean name has been marked as a primary bean. @param beanName the name of the bean @param beanInstance the corresponding bean instance (can be {@code null}) @return whether the given bean qualifies as primary
java
spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultListableBeanFactory.java
2,181
[ "beanName", "beanInstance" ]
true
3
7.76
spring-projects/spring-framework
59,386
javadoc
false
chooseRandomFlags
function chooseRandomFlags(experiments, additionalFlags) { // Add additional flags to second config based on experiment percentages. const extra_flags = []; for (const [p, flags] of additionalFlags) { if (random.choose(p)) { for (const flag of flags.split(' ')) { extra_flags.push('--second-config-extra-flags=' + flag); } } } // Calculate flags determining the experiment. let acc = 0; const threshold = random.random() * 100; for (let [prob, first_config, second_config, second_d8] of experiments) { acc += prob; if (acc > threshold) { return [ '--first-config=' + first_config, '--second-config=' + second_config, '--second-d8=' + second_d8, ].concat(extra_flags); } } // Unreachable. assert(false); }
Randomly chooses a configuration from experiments. The configuration parameters are expected to be passed from a bundled V8 build. Constraints mentioned below are enforced by PRESUBMIT checks on the V8 side. @param {Object[]} experiments List of tuples (probability, first config name, second config name, second d8 name). The probabilities are integers in [0,100]. We assume the sum of all probabilities is 100. @param {Object[]} additionalFlags List of tuples (probability, flag strings). Probability is in [0,1). @return {string[]} List of flags for v8_foozzie.py.
javascript
deps/v8/tools/clusterfuzz/js_fuzzer/differential_script_mutator.js
40
[ "experiments", "additionalFlags" ]
false
3
6.08
nodejs/node
114,839
jsdoc
false
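The `chooseRandomFlags` record selects an experiment by accumulating integer probabilities until the running total exceeds a uniform draw in [0, 100). A minimal Python sketch of that cumulative-weight selection (names are illustrative; the real code also assembles d8 flag strings):

```python
import random

# Walk the experiment list accumulating weights until the total exceeds a
# uniform threshold; with weights summing to 100 this is a weighted choice.

def choose_experiment(experiments, rng=random.random):
    threshold = rng() * 100
    acc = 0
    for prob, config in experiments:
        acc += prob
        if acc > threshold:
            return config
    # Unreachable when the probabilities sum to 100, as the JS code assumes.
    raise AssertionError("probabilities must sum to 100")
```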
getObject
@Override public @Nullable Object getObject() throws IllegalAccessException { if (this.fieldObject == null) { throw new FactoryBeanNotInitializedException(); } ReflectionUtils.makeAccessible(this.fieldObject); if (this.targetObject != null) { // instance field return this.fieldObject.get(this.targetObject); } else { // class field return this.fieldObject.get(null); } }
The bean name of this FieldRetrievingFactoryBean will be interpreted as "staticField" pattern, if neither "targetClass" nor "targetObject" nor "targetField" have been specified. This allows for concise bean definitions with just an id/name.
java
spring-beans/src/main/java/org/springframework/beans/factory/config/FieldRetrievingFactoryBean.java
203
[]
Object
true
3
6.24
spring-projects/spring-framework
59,386
javadoc
false
group_tensors_by_device_and_dtype
def group_tensors_by_device_and_dtype(tensorlistlist, with_indices=False): """Pure Python implementation of torch._C._group_tensors_by_device_and_dtype. Groups tensors by their device and dtype. This is useful before sending tensors off to a foreach implementation, which requires tensors to be on one device and dtype. Args: tensorlistlist: A list of lists of tensors (tensors can be None). with_indices: If True, track original indices in the output. Returns: A dict mapping (device, dtype) tuples to (grouped_tensorlistlist, indices). """ # Result dict: (device, dtype) -> (list of lists, indices) result: dict[tuple[torch.device, torch.dtype], tuple[list[list], list[int]]] = {} if not tensorlistlist or not tensorlistlist[0]: return result num_lists = len(tensorlistlist) num_tensors = len(tensorlistlist[0]) for idx in range(num_tensors): # Find the first non-None tensor at this index to get device and dtype first_tensor = None for tlist in tensorlistlist: if tlist is not None and idx < len(tlist) and tlist[idx] is not None: first_tensor = tlist[idx] break if first_tensor is None: # All tensors at this index are None, skip continue key = (first_tensor.device, first_tensor.dtype) if key not in result: # Initialize empty lists for each tensorlist result[key] = ([[] for _ in range(num_lists)], []) grouped_lists, indices = result[key] # Add tensors from each list at this index for list_idx, tlist in enumerate(tensorlistlist): if tlist is not None and idx < len(tlist): grouped_lists[list_idx].append(tlist[idx]) else: grouped_lists[list_idx].append(None) if with_indices: indices.append(idx) return result
Pure Python implementation of torch._C._group_tensors_by_device_and_dtype. Groups tensors by their device and dtype. This is useful before sending tensors off to a foreach implementation, which requires tensors to be on one device and dtype. Args: tensorlistlist: A list of lists of tensors (tensors can be None). with_indices: If True, track original indices in the output. Returns: A dict mapping (device, dtype) tuples to (grouped_tensorlistlist, indices).
python
torch/_dynamo/polyfills/__init__.py
446
[ "tensorlistlist", "with_indices" ]
false
15
6.96
pytorch/pytorch
96,034
google
false
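The grouping logic in `group_tensors_by_device_and_dtype` reduces to bucketing items by a `(device, dtype)` key while remembering their original positions. A stripped-down sketch of that core idea with a stub tensor type (`FakeTensor` is an illustrative stand-in, not a real `torch.Tensor`):

```python
from collections import namedtuple

# Bucket items by (device, dtype) so each bucket is homogeneous, tracking
# original indices the way the polyfill's with_indices=True path does.
FakeTensor = namedtuple("FakeTensor", ["device", "dtype", "name"])

def group_by_device_and_dtype(tensors):
    result = {}
    for idx, t in enumerate(tensors):
        if t is None:
            continue                      # None entries are skipped, as in the polyfill
        key = (t.device, t.dtype)
        grouped, indices = result.setdefault(key, ([], []))
        grouped.append(t)
        indices.append(idx)               # remember the original position
    return result
```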
parseNestedCustomElement
private @Nullable BeanDefinitionHolder parseNestedCustomElement(Element ele, @Nullable BeanDefinition containingBd) { BeanDefinition innerDefinition = parseCustomElement(ele, containingBd); if (innerDefinition == null) { error("Incorrect usage of element '" + ele.getNodeName() + "' in a nested manner. " + "This tag cannot be used nested inside <property>.", ele); return null; } String id = ele.getNodeName() + BeanDefinitionReaderUtils.GENERATED_BEAN_NAME_SEPARATOR + ObjectUtils.getIdentityHexString(innerDefinition); if (logger.isTraceEnabled()) { logger.trace("Using generated bean name [" + id + "] for nested custom element '" + ele.getNodeName() + "'"); } return new BeanDefinitionHolder(innerDefinition, id); }
Parse a custom element nested inside another element (such as a property element), generating a unique bean name for the resulting inner bean definition. @param ele the element to parse @param containingBd the containing bean definition (if any) @return the inner bean definition wrapped in a BeanDefinitionHolder, or {@code null} if parsing failed
java
spring-beans/src/main/java/org/springframework/beans/factory/xml/BeanDefinitionParserDelegate.java
1,456
[ "ele", "containingBd" ]
BeanDefinitionHolder
true
3
7.28
spring-projects/spring-framework
59,386
javadoc
false
compareAndSet
public final boolean compareAndSet(double expect, double update) { return value.compareAndSet(doubleToRawLongBits(expect), doubleToRawLongBits(update)); }
Atomically sets the value to the given updated value if the current value is <a href="#bitEquals">bitwise equal</a> to the expected value. @param expect the expected value @param update the new value @return {@code true} if successful. False return indicates that the actual value was not bitwise equal to the expected value.
java
android/guava/src/com/google/common/util/concurrent/AtomicDouble.java
124
[ "expect", "update" ]
true
1
6.64
google/guava
51,352
javadoc
false
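The `compareAndSet` record compares raw bit patterns via `doubleToRawLongBits` rather than using `==`. The distinction matters for the two IEEE-754 edge cases called out in its "bitwise equal" link: NaN and signed zeros. A Python sketch of the bit reinterpretation using the standard `struct` module (the helper name mirrors the Java method but is defined here, not imported):

```python
import struct

# Why AtomicDouble compares raw bits instead of ==: bitwise equality treats
# NaN as equal to itself and distinguishes -0.0 from 0.0, while Java/Python
# floating-point == does the opposite in both cases.

def double_to_raw_long_bits(x):
    # Reinterpret the 8 bytes of an IEEE-754 double as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

nan = float("nan")
assert nan != nan                       # == says NaN is never equal to itself
assert double_to_raw_long_bits(nan) == double_to_raw_long_bits(nan)
assert -0.0 == 0.0                      # == conflates the signed zeros
assert double_to_raw_long_bits(-0.0) != double_to_raw_long_bits(0.0)
```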
contains
public boolean contains(Option option) { return this.options.contains(option); }
Returns whether the given option is contained in this set. @param option the option to check @return {@code true} if the option is present
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigData.java
197
[ "option" ]
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
without
public Options without(Option option) { return copy((options) -> options.remove(option)); }
Create a new {@link Options} instance that contains the options in this set excluding the given option. @param option the option to exclude @return a new {@link Options} instance
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigData.java
229
[ "option" ]
Options
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
toZoneId
private static ZoneId toZoneId(final TimeZone timeZone) { return TimeZones.toTimeZone(timeZone).toZoneId(); }
Converts a {@link TimeZone} to a {@link ZoneId}. @param timeZone the time zone, null maps to the default time zone. @return a new ZoneId. @since 3.19.0
java
src/main/java/org/apache/commons/lang3/time/DateUtils.java
1,701
[ "timeZone" ]
ZoneId
true
1
6.64
apache/commons-lang
2,896
javadoc
false
get_loop_body_lowp_fp
def get_loop_body_lowp_fp(_body: LoopBody) -> tuple[Optional[torch.dtype], bool]: """ Returns the low precision data type (torch.float16/torch.bfloat16) contained in the nodes and if all the nodes can codegen with this data type without converting to float. Otherwise returns None and True. """ sub_blocks = [_body.root_block] + list(_body.subblocks.values()) _lowp_fp_type: Optional[torch.dtype] = None _use_fp32 = False for sub_block in sub_blocks: for _node in sub_block.graph.nodes: if _node.op == "placeholder" or _node.target in ( "get_index", "index_expr", ): continue # Fast path if all operations can support bf16/fp16 without converting to fp32 if _node.target not in [ "load", "store", "abs", "neg", "output", ]: _use_fp32 = True if hasattr(_node, "meta") and _node.meta: assert OptimizationContext.key in _node.meta opt_ctx: OptimizationContext = _node.meta[OptimizationContext.key] if not opt_ctx.dtype or opt_ctx.dtype not in DTYPE_LOWP_FP: _use_fp32 = True elif _lowp_fp_type is not None: if _lowp_fp_type != opt_ctx.dtype: warnings.warn("bf16 and fp16 are mixed in the scheduler node.") else: _lowp_fp_type = opt_ctx.dtype else: _use_fp32 = True return _lowp_fp_type, _use_fp32
Returns the low precision data type (torch.float16/torch.bfloat16) contained in the nodes and if all the nodes can codegen with this data type without converting to float. Otherwise returns None and True.
python
torch/_inductor/codegen/cpp.py
3,733
[ "_body" ]
tuple[Optional[torch.dtype], bool]
true
14
6
pytorch/pytorch
96,034
unknown
false
emit_metric
def emit_metric( metric_name: str, metrics: dict[str, Any], ) -> None: """ Upload a metric to DynamoDB (and from there, the HUD backend database). Even if EMIT_METRICS is set to False, this function will still run the code to validate and shape the metrics, skipping just the upload. Parameters: metric_name: Name of the metric. Every unique metric should have a different name and be emitted just once per run attempt. Metrics are namespaced by their module and the function that emitted them. metrics: The actual data to record. Some default values are populated from environment variables, which must be set for metrics to be emitted. (If they're not set, this function becomes a noop): """ if metrics is None: raise ValueError("You didn't ask to upload any metrics!") # Merge the given metrics with the global metrics, overwriting any duplicates # with the given metrics. metrics = {**global_metrics, **metrics} # We use these env vars that to determine basic info about the workflow run. # By using env vars, we don't have to pass this info around to every function. # It also helps ensure that we only emit metrics during CI env_var_metrics = [ EnvVarMetric("repo", "GITHUB_REPOSITORY"), EnvVarMetric("workflow", "GITHUB_WORKFLOW"), EnvVarMetric("build_environment", "BUILD_ENVIRONMENT", required=False), EnvVarMetric("job", "GITHUB_JOB"), EnvVarMetric("test_config", "TEST_CONFIG", required=False), EnvVarMetric("pr_number", "PR_NUMBER", required=False, type_conversion_fn=int), EnvVarMetric("run_id", "GITHUB_RUN_ID", type_conversion_fn=int), EnvVarMetric("run_number", "GITHUB_RUN_NUMBER", type_conversion_fn=int), EnvVarMetric("run_attempt", "GITHUB_RUN_ATTEMPT", type_conversion_fn=int), EnvVarMetric("job_id", "JOB_ID", type_conversion_fn=int), EnvVarMetric("job_name", "JOB_NAME"), ] # Use info about the function that invoked this one as a namespace and a way to filter metrics. calling_frame = inspect.currentframe().f_back # type: ignore[union-attr] calling_frame_info = inspect.getframeinfo(calling_frame) # type: ignore[arg-type] calling_file = os.path.basename(calling_frame_info.filename) calling_module = inspect.getmodule(calling_frame).__name__ # type: ignore[union-attr] calling_function = calling_frame_info.function try: default_metrics = { "metric_name": metric_name, "calling_file": calling_file, "calling_module": calling_module, "calling_function": calling_function, "timestamp": datetime.datetime.now(timezone.utc).strftime( "%Y-%m-%d %H:%M:%S.%f" ), **{m.name: m.value() for m in env_var_metrics if m.value()}, } except ValueError as e: warn(f"Not emitting metrics for {metric_name}. {e}") return # Prefix key with metric name and timestamp to derisk chance of a uuid1 name collision s3_key = f"{metric_name}_{int(time.time())}_{uuid.uuid1().hex}" if EMIT_METRICS: try: upload_to_s3( bucket_name="ossci-raw-job-status", key=f"ossci_uploaded_metrics/{s3_key}", docs=[{**default_metrics, "info": metrics}], ) except Exception as e: # We don't want to fail the job if we can't upload the metric. # We still raise the ValueErrors outside this try block since those indicate improperly configured metrics warn(f"Error uploading metric {metric_name} to DynamoDB: {e}") return else: print(f"Not emitting metrics for {metric_name}. Boto wasn't imported.")
Upload a metric to DynamoDB (and from there, the HUD backend database). Even if EMIT_METRICS is set to False, this function will still run the code to validate and shape the metrics, skipping just the upload. Parameters: metric_name: Name of the metric. Every unique metric should have a different name and be emitted just once per run attempt. Metrics are namespaced by their module and the function that emitted them. metrics: The actual data to record. Some default values are populated from environment variables, which must be set for metrics to be emitted. (If they're not set, this function becomes a noop):
python
tools/stats/upload_metrics.py
76
[ "metric_name", "metrics" ]
None
true
4
6.96
pytorch/pytorch
96,034
google
false
_is_dtype_type
def _is_dtype_type(arr_or_dtype, condition) -> bool: """ Return true if the condition is satisfied for the arr_or_dtype. Parameters ---------- arr_or_dtype : array-like or dtype The array-like or dtype object whose dtype we want to extract. condition : callable[Union[np.dtype, ExtensionDtypeType]] Returns ------- bool : if the condition is satisfied for the arr_or_dtype """ if arr_or_dtype is None: return condition(type(None)) # fastpath if isinstance(arr_or_dtype, np.dtype): return condition(arr_or_dtype.type) elif isinstance(arr_or_dtype, type): if issubclass(arr_or_dtype, ExtensionDtype): arr_or_dtype = arr_or_dtype.type return condition(np.dtype(arr_or_dtype).type) # if we have an array-like if hasattr(arr_or_dtype, "dtype"): arr_or_dtype = arr_or_dtype.dtype # we are not possibly a dtype elif is_list_like(arr_or_dtype): return condition(type(None)) try: tipo = pandas_dtype(arr_or_dtype).type except (TypeError, ValueError): if is_scalar(arr_or_dtype): return condition(type(None)) return False return condition(tipo)
Return true if the condition is satisfied for the arr_or_dtype. Parameters ---------- arr_or_dtype : array-like or dtype The array-like or dtype object whose dtype we want to extract. condition : callable[Union[np.dtype, ExtensionDtypeType]] Returns ------- bool : if the condition is satisfied for the arr_or_dtype
python
pandas/core/dtypes/common.py
1,659
[ "arr_or_dtype", "condition" ]
bool
true
8
6.4
pandas-dev/pandas
47,362
numpy
false
recode_for_categories
def recode_for_categories( codes: np.ndarray, old_categories, new_categories, *, copy: bool = True, warn: bool = False, ) -> np.ndarray: """ Convert a set of codes for to a new set of categories Parameters ---------- codes : np.ndarray old_categories, new_categories : Index copy: bool, default True Whether to copy if the codes are unchanged. warn : bool, default False Whether to warn on silent-NA mapping. Returns ------- new_codes : np.ndarray[np.int64] Examples -------- >>> old_cat = pd.Index(["b", "a", "c"]) >>> new_cat = pd.Index(["a", "b"]) >>> codes = np.array([0, 1, 1, 2]) >>> recode_for_categories(codes, old_cat, new_cat, copy=True) array([ 1, 0, 0, -1], dtype=int8) """ if len(old_categories) == 0: # All null anyway, so just retain the nulls if copy: return codes.copy() return codes elif new_categories.equals(old_categories): # Same categories, so no need to actually recode if copy: return codes.copy() return codes codes_in_old_cats = new_categories.get_indexer_for(old_categories) if warn: wrong = codes_in_old_cats == -1 if wrong.any(): warnings.warn( "Constructing a Categorical with a dtype and values containing " "non-null entries not in that dtype's categories is deprecated " "and will raise in a future version.", Pandas4Warning, stacklevel=find_stack_level(), ) indexer = coerce_indexer_dtype(codes_in_old_cats, new_categories) new_codes = take_nd(indexer, codes, fill_value=-1) return new_codes
Convert a set of codes to a new set of categories Parameters ---------- codes : np.ndarray old_categories, new_categories : Index copy: bool, default True Whether to copy if the codes are unchanged. warn : bool, default False Whether to warn on silent-NA mapping. Returns ------- new_codes : np.ndarray[np.int64] Examples -------- >>> old_cat = pd.Index(["b", "a", "c"]) >>> new_cat = pd.Index(["a", "b"]) >>> codes = np.array([0, 1, 1, 2]) >>> recode_for_categories(codes, old_cat, new_cat, copy=True) array([ 1, 0, 0, -1], dtype=int8)
python
pandas/core/arrays/categorical.py
3,055
[ "codes", "old_categories", "new_categories", "copy", "warn" ]
np.ndarray
true
7
8.4
pandas-dev/pandas
47,362
numpy
false
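The recoding in `recode_for_categories` boils down to building an indexer from old category positions to new ones and applying it to the codes, with -1 (the categorical NA sentinel) for categories that disappeared. A pure-Python sketch of that mapping (illustrative helper, no pandas or numpy):

```python
# Map each old code to the position of its category in the new category
# list; -1 marks NA, both for incoming NA codes and for dropped categories.

def recode_codes(codes, old_categories, new_categories):
    new_pos = {cat: i for i, cat in enumerate(new_categories)}
    # old code -> new code (or -1 when the old category is absent)
    indexer = [new_pos.get(cat, -1) for cat in old_categories]
    return [indexer[c] if c >= 0 else -1 for c in codes]

# Mirrors the docstring example: codes [0, 1, 1, 2] over ['b', 'a', 'c']
# recoded against ['a', 'b'] give [1, 0, 0, -1].
```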
isMaxValAllBitSetLiteral
static bool isMaxValAllBitSetLiteral(const EnumDecl *EnumDec) { auto EnumConst = std::max_element( EnumDec->enumerator_begin(), EnumDec->enumerator_end(), [](const EnumConstantDecl *E1, const EnumConstantDecl *E2) { return E1->getInitVal() < E2->getInitVal(); }); if (const Expr *InitExpr = EnumConst->getInitExpr()) { return EnumConst->getInitVal().countr_one() == EnumConst->getInitVal().getActiveBits() && isa<IntegerLiteral>(InitExpr->IgnoreImpCasts()); } return false; }
Return true if the enum's maximum enumerator is initialized with an integer literal whose value has all of its low bits set (i.e. is of the form 2^n - 1).
cpp
clang-tools-extra/clang-tidy/bugprone/SuspiciousEnumUsageCheck.cpp
76
[]
true
3
6.56
llvm/llvm-project
36,021
doxygen
false
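The "all bits set" condition above (trailing-ones count equals the number of active bits) has a compact arithmetic equivalent; a minimal Python sketch, not the clang-tidy implementation:

```python
def is_all_bits_set(v: int) -> bool:
    # A positive integer is a run of trailing ones (1, 3, 7, 15, ...) exactly
    # when adding one clears every set bit, i.e. v & (v + 1) == 0.
    return v > 0 and (v & (v + 1)) == 0
```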
skipToEndOfLine
private void skipToEndOfLine() { for (; this.pos < this.in.length(); this.pos++) { char c = this.in.charAt(this.pos); if (c == '\r' || c == '\n') { this.pos++; break; } } }
Advances the position until after the next newline character. If the line is terminated by "\r\n", the '\n' must be consumed as whitespace by the caller.
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONTokener.java
167
[]
void
true
4
7.04
spring-projects/spring-boot
79,428
javadoc
false
loadEnvFile
function loadEnvFile(path = undefined) { // Provide optional value so that `loadEnvFile.length` returns 0 if (path != null) { getValidatedPath ??= require('internal/fs/utils').getValidatedPath; path = getValidatedPath(path); _loadEnvFile(path); } else { _loadEnvFile(); } }
Loads the `.env` file to process.env. @param {string | URL | Buffer | undefined} path
javascript
lib/internal/process/per_thread.js
356
[ "path" ]
false
3
6.24
nodejs/node
114,839
jsdoc
false
hasEntry
public boolean hasEntry(String name) { NestedJarEntry lastEntry = this.lastEntry; if (lastEntry != null && name.equals(lastEntry.getName())) { return true; } ZipContent.Entry entry = getVersionedContentEntry(name); if (entry != null) { return true; } synchronized (this) { ensureOpen(); return this.resources.zipContent().hasEntry(null, name); } }
Return if an entry with the given name exists. @param name the name to check @return if the entry exists
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/jar/NestedJarFile.java
242
[ "name" ]
true
4
8.08
spring-projects/spring-boot
79,428
javadoc
false
file_path_to_url
def file_path_to_url(path: str) -> str: """ converts an absolute native path to a FILE URL. Parameters ---------- path : a path in native format Returns ------- a valid FILE URL """ # lazify expensive import (~30ms) from urllib.request import pathname2url return urljoin("file:", pathname2url(path))
converts an absolute native path to a FILE URL. Parameters ---------- path : a path in native format Returns ------- a valid FILE URL
python
pandas/io/common.py
486
[ "path" ]
str
true
1
6
pandas-dev/pandas
47,362
numpy
false
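The helper is just the two stdlib calls shown; the sketch below mirrors it. The exact URL shape is OS-dependent (Windows drive letters are encoded differently), so treat the output as illustrative.

```python
from urllib.parse import urljoin
from urllib.request import pathname2url


def file_path_to_url(path: str) -> str:
    # Same two stdlib calls the pandas helper uses: percent-encode the
    # native path, then join it onto the "file:" scheme.
    return urljoin("file:", pathname2url(path))
```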
to_string
def to_string( self, buf: FilePath | WriteBuffer[str] | None = None, na_rep: str = "NaN", float_format: str | None = None, header: bool = True, index: bool = True, length: bool = False, dtype: bool = False, name: bool = False, max_rows: int | None = None, min_rows: int | None = None, ) -> str | None: """ Render a string representation of the Series. Parameters ---------- buf : StringIO-like, optional Buffer to write to. na_rep : str, optional String representation of NaN to use, default 'NaN'. float_format : one-parameter function, optional Formatter function to apply to columns' elements if they are floats, default None. header : bool, default True Add the Series header (index name). index : bool, optional Add index (row) labels, default True. length : bool, default False Add the Series length. dtype : bool, default False Add the Series dtype. name : bool, default False Add the Series name if not None. max_rows : int, optional Maximum number of rows to show before truncating. If None, show all. min_rows : int, optional The number of rows to display in a truncated repr (when number of rows is above `max_rows`). Returns ------- str or None String representation of Series if ``buf=None``, otherwise None. See Also -------- Series.to_dict : Convert Series to dict object. Series.to_frame : Convert Series to DataFrame object. Series.to_markdown : Print Series in Markdown-friendly format. Series.to_timestamp : Cast to DatetimeIndex of Timestamps. Examples -------- >>> ser = pd.Series([1, 2, 3]).to_string() >>> ser '0 1\\n1 2\\n2 3' """ formatter = fmt.SeriesFormatter( self, name=name, length=length, header=header, index=index, dtype=dtype, na_rep=na_rep, float_format=float_format, min_rows=min_rows, max_rows=max_rows, ) result = formatter.to_string() # catch contract violations if not isinstance(result, str): raise AssertionError( "result must be of type str, type " f"of result is {type(result).__name__!r}" ) if buf is None: return result else: if hasattr(buf, "write"): buf.write(result) else: with open(buf, "w", encoding="utf-8") as f: f.write(result) return None
Render a string representation of the Series. Parameters ---------- buf : StringIO-like, optional Buffer to write to. na_rep : str, optional String representation of NaN to use, default 'NaN'. float_format : one-parameter function, optional Formatter function to apply to columns' elements if they are floats, default None. header : bool, default True Add the Series header (index name). index : bool, optional Add index (row) labels, default True. length : bool, default False Add the Series length. dtype : bool, default False Add the Series dtype. name : bool, default False Add the Series name if not None. max_rows : int, optional Maximum number of rows to show before truncating. If None, show all. min_rows : int, optional The number of rows to display in a truncated repr (when number of rows is above `max_rows`). Returns ------- str or None String representation of Series if ``buf=None``, otherwise None. See Also -------- Series.to_dict : Convert Series to dict object. Series.to_frame : Convert Series to DataFrame object. Series.to_markdown : Print Series in Markdown-friendly format. Series.to_timestamp : Cast to DatetimeIndex of Timestamps. Examples -------- >>> ser = pd.Series([1, 2, 3]).to_string() >>> ser '0 1\\n1 2\\n2 3'
python
pandas/core/series.py
1,491
[ "self", "buf", "na_rep", "float_format", "header", "index", "length", "dtype", "name", "max_rows", "min_rows" ]
str | None
true
6
8.56
pandas-dev/pandas
47,362
numpy
false
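The docstring's example output comes from a simple index-value layout. A toy sketch of that formatting, standing in for `fmt.SeriesFormatter` (the real formatter also right-aligns, truncates, and can append name/length/dtype footers):

```python
def series_to_string(values):
    # One "index    value" row per element, newline-joined -- a rough
    # stand-in for what SeriesFormatter produces for a small Series.
    return "\n".join(f"{i}    {v}" for i, v in enumerate(values))
```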
addPropertyValues
public MutablePropertyValues addPropertyValues(@Nullable Map<?, ?> other) { if (other != null) { other.forEach((attrName, attrValue) -> addPropertyValue( new PropertyValue(attrName.toString(), attrValue))); } return this; }
Add all property values from the given Map. @param other a Map with property values keyed by property name, which must be a String @return this in order to allow for adding multiple property values in a chain
java
spring-beans/src/main/java/org/springframework/beans/MutablePropertyValues.java
159
[ "other" ]
MutablePropertyValues
true
2
8.24
spring-projects/spring-framework
59,386
javadoc
false
verifyIndex
private boolean verifyIndex(int i) { if ((getLeftChildIndex(i) < size) && (compareElements(i, getLeftChildIndex(i)) > 0)) { return false; } if ((getRightChildIndex(i) < size) && (compareElements(i, getRightChildIndex(i)) > 0)) { return false; } if ((i > 0) && (compareElements(i, getParentIndex(i)) > 0)) { return false; } if ((i > 2) && (compareElements(getGrandparentIndex(i), i) > 0)) { return false; } return true; }
Returns true iff the element at index {@code i} satisfies the heap invariants relative to its children, parent, and grandparent; used as an internal consistency check.
java
android/guava/src/com/google/common/collect/MinMaxPriorityQueue.java
729
[ "i" ]
true
9
6.72
google/guava
51,352
javadoc
false
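The index arithmetic behind the check above is the standard array-heap layout. Guava's MinMaxPriorityQueue alternates min and max levels, so the sketch below simplifies to a plain min-heap invariant check with the same child/parent indexing; names are illustrative.

```python
def left_child(i):
    return 2 * i + 1


def right_child(i):
    return 2 * i + 2


def parent(i):
    return (i - 1) // 2


def verify_min_heap_index(heap, i):
    # Simplified invariant for a plain min-heap: element i must not exceed
    # either child and must not be smaller than its parent.
    n = len(heap)
    if left_child(i) < n and heap[i] > heap[left_child(i)]:
        return False
    if right_child(i) < n and heap[i] > heap[right_child(i)]:
        return False
    if i > 0 and heap[i] < heap[parent(i)]:
        return False
    return True
```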
create
public static <T> EventListenerSupport<T> create(final Class<T> listenerInterface) { return new EventListenerSupport<>(listenerInterface); }
Creates an EventListenerSupport object which supports the specified listener type. @param <T> the type of the listener interface @param listenerInterface the type of listener interface that will receive events posted using this class. @return an EventListenerSupport object which supports the specified listener type. @throws NullPointerException if {@code listenerInterface} is {@code null}. @throws IllegalArgumentException if {@code listenerInterface} is not an interface.
java
src/main/java/org/apache/commons/lang3/event/EventListenerSupport.java
153
[ "listenerInterface" ]
true
1
6.16
apache/commons-lang
2,896
javadoc
false
processBitVectors
public static <E extends Enum<E>> EnumSet<E> processBitVectors(final Class<E> enumClass, final long... values) { final EnumSet<E> results = EnumSet.noneOf(asEnum(enumClass)); final long[] lvalues = ArrayUtils.clone(Objects.requireNonNull(values, "values")); ArrayUtils.reverse(lvalues); stream(enumClass).forEach(constant -> { final int block = constant.ordinal() / Long.SIZE; if (block < lvalues.length && (lvalues[block] & 1L << constant.ordinal() % Long.SIZE) != 0) { results.add(constant); } }); return results; }
Convert a {@code long[]} created by {@link EnumUtils#generateBitVectors} into the set of enum values that it represents. <p>If you store this value, beware any changes to the enum that would affect ordinal values.</p> @param enumClass the class of the enum we are working with, not {@code null}. @param values the long[] bearing the representation of a set of enum values, the least significant digits rightmost, not {@code null}. @param <E> the type of the enumeration. @return a set of enum values. @throws NullPointerException if {@code enumClass} is {@code null}. @throws IllegalArgumentException if {@code enumClass} is not an enum class. @since 3.2
java
src/main/java/org/apache/commons/lang3/EnumUtils.java
444
[ "enumClass" ]
true
3
8.08
apache/commons-lang
2,896
javadoc
false
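The block/bit arithmetic above (ordinal / 64 selects a long, ordinal % 64 a bit within it, after reversing so index 0 is least significant) translates directly to Python. `num_constants` stands in for iterating the enum's constants and is an illustrative simplification:

```python
def process_bit_vectors(num_constants, values):
    # values carries the least significant long rightmost (as produced by
    # EnumUtils.generateBitVectors), so reverse before indexing by block.
    lvalues = list(reversed(values))
    result = set()
    for ordinal in range(num_constants):
        block = ordinal // 64
        if block < len(lvalues) and lvalues[block] & (1 << (ordinal % 64)):
            result.add(ordinal)
    return result
```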
next_gb_id
def next_gb_id(reg: dict[str, Any]) -> str: """Generate a random unused GB ID from GB0000-GB9999 range.""" used_ids = set(reg.keys()) max_attempts = 100 # Try random selection first for _ in range(max_attempts): candidate = f"GB{random.randint(0, 9999):04d}" if candidate not in used_ids: return candidate # Fallback: find first available ID if random selection keeps colliding for i in range(10000): candidate = f"GB{i:04d}" if candidate not in used_ids: return candidate raise RuntimeError("No available GB IDs in range GB0000-GB9999")
Generate a random unused GB ID from GB0000-GB9999 range.
python
tools/dynamo/gb_id_mapping.py
26
[ "reg" ]
str
true
5
6
pytorch/pytorch
96,034
unknown
false
createArray
private static Object createArray(Class<?> arrayType) { Assert.notNull(arrayType, "Array type must not be null"); Class<?> componentType = arrayType.componentType(); if (componentType.isArray()) { Object array = Array.newInstance(componentType, 1); Array.set(array, 0, createArray(componentType)); return array; } else { return Array.newInstance(componentType, 0); } }
Create the array for the given array type. @param arrayType the desired type of the target array @return a new array instance
java
spring-beans/src/main/java/org/springframework/beans/AbstractNestablePropertyAccessor.java
927
[ "arrayType" ]
Object
true
2
7.92
spring-projects/spring-framework
59,386
javadoc
false
__set_dag_run_state_to_running_or_queued
def __set_dag_run_state_to_running_or_queued( *, new_state: DagRunState, dag: SerializedDAG, run_id: str | None = None, commit: bool = False, session: SASession, ) -> list[TaskInstance]: """ Set the dag run with the given run_id to the given state (running or queued). :param new_state: the state to set the DagRun to :param dag: the DAG of which to alter state :param run_id: the id of the DagRun :param commit: commit DAG and tasks to be altered to the database :param session: database session :return: If commit is true, list of tasks that have been updated, otherwise list of tasks that will be updated """ res: list[TaskInstance] = [] if not dag: return res if not run_id: raise ValueError(f"DagRun with run_id: {run_id} not found") # Mark the dag run with the requested state. if commit: _set_dag_run_state(dag.dag_id, run_id, new_state, session) # To keep the return type consistent with the other similar functions. return res
Set the dag run with the given run_id to the given state (running or queued). :param new_state: the state to set the DagRun to :param dag: the DAG of which to alter state :param run_id: the id of the DagRun :param commit: commit DAG and tasks to be altered to the database :param session: database session :return: If commit is true, list of tasks that have been updated, otherwise list of tasks that will be updated
python
airflow-core/src/airflow/api/common/mark_tasks.py
356
[ "new_state", "dag", "run_id", "commit", "session" ]
list[TaskInstance]
true
4
7.92
apache/airflow
43,597
sphinx
false
create_model_package_group
def create_model_package_group(self, package_group_name: str, package_group_desc: str = "") -> bool: """ Create a Model Package Group if it does not already exist. .. seealso:: - :external+boto3:py:meth:`SageMaker.Client.create_model_package_group` :param package_group_name: Name of the model package group to create if not already present. :param package_group_desc: Description of the model package group, if it was to be created (optional). :return: True if the model package group was created, False if it already existed. """ try: res = self.conn.create_model_package_group( ModelPackageGroupName=package_group_name, ModelPackageGroupDescription=package_group_desc, ) self.log.info( "Created new Model Package Group with name %s (ARN: %s)", package_group_name, res["ModelPackageGroupArn"], ) return True except ClientError as e: # ValidationException can also happen if the package group name contains invalid char, # so we have to look at the error message too if e.response["Error"]["Code"] == "ValidationException" and e.response["Error"][ "Message" ].startswith("Model Package Group already exists"): # log msg only so it doesn't look like an error self.log.info("%s", e.response["Error"]["Message"]) return False self.log.error("Error when trying to create Model Package Group: %s", e) raise
Create a Model Package Group if it does not already exist. .. seealso:: - :external+boto3:py:meth:`SageMaker.Client.create_model_package_group` :param package_group_name: Name of the model package group to create if not already present. :param package_group_desc: Description of the model package group, if it was to be created (optional). :return: True if the model package group was created, False if it already existed.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/sagemaker.py
1,187
[ "self", "package_group_name", "package_group_desc" ]
bool
true
3
7.6
apache/airflow
43,597
sphinx
false
getRightHandSideOfAssignment
function getRightHandSideOfAssignment(rightHandSide: Expression): FunctionExpression | ArrowFunction | ConstructorDeclaration | undefined { while (rightHandSide.kind === SyntaxKind.ParenthesizedExpression) { rightHandSide = (rightHandSide as ParenthesizedExpression).expression; } switch (rightHandSide.kind) { case SyntaxKind.FunctionExpression: case SyntaxKind.ArrowFunction: return (rightHandSide as FunctionExpression); case SyntaxKind.ClassExpression: return find((rightHandSide as ClassExpression).members, isConstructorDeclaration); } }
Unwraps any parenthesized expressions on the right-hand side of an assignment and returns the function-like value it yields: a function expression, an arrow function, or the constructor declaration of a class expression; returns undefined otherwise.
typescript
src/services/jsDoc.ts
634
[ "rightHandSide" ]
true
2
6.24
microsoft/TypeScript
107,154
jsdoc
false
build
@Override public String build() { return toString(); }
Implement the {@link Builder} interface. @return the builder as a String @since 3.2 @see #toString()
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
1,564
[]
String
true
1
6.64
apache/commons-lang
2,896
javadoc
false
detectAndParse
public static Duration detectAndParse(String value, DurationFormat.@Nullable Unit unit) { return parse(value, detect(value), unit); }
Detect the style then parse the value to return a duration. @param value the value to parse @param unit the duration unit to use if the value doesn't specify one ({@code null} will default to ms) @return the parsed duration @throws IllegalArgumentException if the value is not a known style or cannot be parsed
java
spring-context/src/main/java/org/springframework/format/datetime/standard/DurationFormatterUtils.java
146
[ "value", "unit" ]
Duration
true
1
6.48
spring-projects/spring-framework
59,386
javadoc
false
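The detect-then-parse split can be sketched by classifying the input string before parsing. The patterns below are illustrative, not Spring's actual DurationStyle regexes: ISO-8601 durations start with an optional sign and 'P' (e.g. "PT15M"), while the simple style is "<number><optional unit>" (e.g. "10s", "500ms").

```python
import re


def detect_style(value: str) -> str:
    # Classify a duration string so the right parser can be chosen.
    v = value.strip()
    if re.fullmatch(r"[+-]?[pP].+", v):
        return "ISO8601"
    if re.fullmatch(r"[+-]?\d+(ns|us|ms|s|m|h|d)?", v, flags=re.IGNORECASE):
        return "SIMPLE"
    raise ValueError(f"'{value}' is not a valid duration")
```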
deleteRecords
DeleteRecordsResult deleteRecords(Map<TopicPartition, RecordsToDelete> recordsToDelete, DeleteRecordsOptions options);
Delete records whose offset is smaller than the given offset of the corresponding partition. <p> This operation is supported by brokers with version 0.11.0.0 or higher. @param recordsToDelete The topic partitions and related offsets from which records deletion starts. @param options The options to use when deleting records. @return The DeleteRecordsResult.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
702
[ "recordsToDelete", "options" ]
DeleteRecordsResult
true
1
6.16
apache/kafka
31,560
javadoc
false
addAndGet
public short addAndGet(final Number operand) { this.value += operand.shortValue(); return value; }
Increments this instance's value by {@code operand}; this method returns the value associated with the instance immediately after the addition operation. This method is not thread safe. @param operand the quantity to add, not null. @throws NullPointerException if {@code operand} is null. @return the value associated with this instance after adding the operand. @since 3.5
java
src/main/java/org/apache/commons/lang3/mutable/MutableShort.java
112
[ "operand" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
createPrincipalBuilder
public static KafkaPrincipalBuilder createPrincipalBuilder(Map<String, ?> configs, KerberosShortNamer kerberosShortNamer, SslPrincipalMapper sslPrincipalMapper) { Class<?> principalBuilderClass = (Class<?>) configs.get(BrokerSecurityConfigs.PRINCIPAL_BUILDER_CLASS_CONFIG); final KafkaPrincipalBuilder builder; if (principalBuilderClass == null || principalBuilderClass == DefaultKafkaPrincipalBuilder.class) { builder = new DefaultKafkaPrincipalBuilder(kerberosShortNamer, sslPrincipalMapper); } else if (KafkaPrincipalBuilder.class.isAssignableFrom(principalBuilderClass)) { builder = (KafkaPrincipalBuilder) Utils.newInstance(principalBuilderClass); } else { throw new InvalidConfigurationException("Type " + principalBuilderClass.getName() + " is not " + "an instance of " + KafkaPrincipalBuilder.class.getName()); } if (builder instanceof Configurable) ((Configurable) builder).configure(configs); return builder; }
Create the {@link KafkaPrincipalBuilder} configured via the principal builder class config. If the property is unset or set to {@link DefaultKafkaPrincipalBuilder}, the default builder is returned; otherwise the configured class is instantiated, and configured with the supplied configs if it implements {@link Configurable}. @return the principal builder; never null
java
clients/src/main/java/org/apache/kafka/common/network/ChannelBuilders.java
219
[ "configs", "kerberosShortNamer", "sslPrincipalMapper" ]
KafkaPrincipalBuilder
true
5
6.72
apache/kafka
31,560
javadoc
false
prepareAcquire
private void prepareAcquire() { if (isShutdown()) { throw new IllegalStateException("TimedSemaphore is shut down!"); } if (task == null) { task = startTimer(); } }
Prepares an acquire operation. Checks for the current state and starts the internal timer if necessary. This method must be called with the lock of this object held.
java
src/main/java/org/apache/commons/lang3/concurrent/TimedSemaphore.java
422
[]
void
true
3
7.04
apache/commons-lang
2,896
javadoc
false
appendSeparator
public StrBuilder appendSeparator(final String standard, final String defaultIfEmpty) { final String str = isEmpty() ? defaultIfEmpty : standard; if (str != null) { append(str); } return this; }
Appends one of two separators to the StrBuilder. If the builder is currently empty, it will append the defaultIfEmpty separator; otherwise it will append the standard separator. Appending a null separator will have no effect. The separator is appended using {@link #append(String)}. <p> This method is for example useful for constructing queries </p> <pre> StrBuilder whereClause = new StrBuilder(); if (searchCommand.getPriority() != null) { whereClause.appendSeparator(" and", " where"); whereClause.append(" priority = ?") } if (searchCommand.getComponent() != null) { whereClause.appendSeparator(" and", " where"); whereClause.append(" component = ?") } selectClause.append(whereClause) </pre> @param standard the separator if builder is not empty, null means no separator @param defaultIfEmpty the separator if builder is empty, null means no separator @return {@code this} instance. @since 2.5
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
1,359
[ "standard", "defaultIfEmpty" ]
StrBuilder
true
3
7.44
apache/commons-lang
2,896
javadoc
false
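The WHERE-clause idiom from the javadoc translates directly; a minimal Python builder with the same empty/non-empty separator rule (names are illustrative, not a port of commons-lang):

```python
class StrBuilder:
    # Just enough of the builder to show the appendSeparator idiom.
    def __init__(self):
        self._parts = []

    def is_empty(self):
        return not self._parts

    def append(self, s):
        self._parts.append(s)
        return self

    def append_separator(self, standard, default_if_empty):
        # First append gets the "default if empty" separator, later ones
        # the standard separator; None means no separator at all.
        sep = default_if_empty if self.is_empty() else standard
        if sep is not None:
            self.append(sep)
        return self

    def __str__(self):
        return "".join(self._parts)
```

Building two conditions yields " where priority = ? and component = ?", matching the javadoc's query example.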
_convert_slice_indexer
def _convert_slice_indexer(self, key: slice, kind: Literal["loc", "getitem"]): """ Convert a slice indexer. By definition, these are labels unless 'iloc' is passed in. Floats are not allowed as the start, step, or stop of the slice. Parameters ---------- key : label of the slice bound kind : {'loc', 'getitem'} """ # potentially cast the bounds to integers start, stop, step = key.start, key.stop, key.step # figure out if this is a positional indexer is_index_slice = is_valid_positional_slice(key) # TODO(GH#50617): once Series.__[gs]etitem__ is removed we should be able # to simplify this. if kind == "getitem": # called from the getitem slicers, validate that we are in fact integers if is_index_slice: # In this case the _validate_indexer checks below are redundant return key elif self.dtype.kind in "iu": # Note: these checks are redundant if we know is_index_slice self._validate_indexer("slice", key.start, "getitem") self._validate_indexer("slice", key.stop, "getitem") self._validate_indexer("slice", key.step, "getitem") return key # convert the slice to an indexer here; checking that the user didn't # pass a positional slice to loc is_positional = is_index_slice and self._should_fallback_to_positional # if we are mixed and have integers if is_positional: try: # Validate start & stop if start is not None: self.get_loc(start) if stop is not None: self.get_loc(stop) is_positional = False except KeyError: pass if com.is_null_slice(key): # It doesn't matter if we are positional or label based indexer = key elif is_positional: if kind == "loc": # GH#16121, GH#24612, GH#31810 raise TypeError( "Slicing a positional slice with .loc is not allowed, " "Use .loc with labels or .iloc with positions instead.", ) indexer = key else: indexer = self.slice_indexer(start, stop, step) return indexer
Convert a slice indexer. By definition, these are labels unless 'iloc' is passed in. Floats are not allowed as the start, step, or stop of the slice. Parameters ---------- key : label of the slice bound kind : {'loc', 'getitem'}
python
pandas/core/indexes/base.py
4,046
[ "self", "key", "kind" ]
true
12
7.12
pandas-dev/pandas
47,362
numpy
false
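The pivotal check in the conversion above is whether a slice can be read positionally. A simplified sketch of that predicate (real pandas also accepts NumPy integer scalars as bounds):

```python
def is_valid_positional_slice(key: slice) -> bool:
    # A slice is treated as positional when every bound is None or an int.
    def ok(bound):
        return bound is None or isinstance(bound, int)

    return ok(key.start) and ok(key.stop) and ok(key.step)
```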
equals
@Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; MetricNameTemplate other = (MetricNameTemplate) o; return Objects.equals(name, other.name) && Objects.equals(group, other.group) && Objects.equals(tags, other.tags); }
Indicates whether some other object is equal to this MetricNameTemplate: both must have the same name, group, and set of tag names.
java
clients/src/main/java/org/apache/kafka/common/MetricNameTemplate.java
114
[ "o" ]
true
6
6.88
apache/kafka
31,560
javadoc
false
bean
public AbstractBeanDefinition bean(Class<?> type, Object...args) { GroovyBeanDefinitionWrapper current = this.currentBeanDefinition; try { Closure<?> callable = null; Collection<Object> constructorArgs = null; if (!ObjectUtils.isEmpty(args)) { int index = args.length; Object lastArg = args[index - 1]; if (lastArg instanceof Closure<?> closure) { callable = closure; index--; } constructorArgs = resolveConstructorArguments(args, 0, index); } this.currentBeanDefinition = new GroovyBeanDefinitionWrapper(null, type, constructorArgs); if (callable != null) { callable.call(this.currentBeanDefinition); } return this.currentBeanDefinition.getBeanDefinition(); } finally { this.currentBeanDefinition = current; } }
Define an inner bean definition. @param type the bean type @param args the constructors arguments and closure configurer @return the bean definition
java
spring-beans/src/main/java/org/springframework/beans/factory/groovy/GroovyBeanDefinitionReader.java
315
[ "type" ]
AbstractBeanDefinition
true
4
8.08
spring-projects/spring-framework
59,386
javadoc
false
paired_cosine_distances
def paired_cosine_distances(X, Y): """ Compute the paired cosine distances between X and Y. Read more in the :ref:`User Guide <metrics>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) An array where each row is a sample and each column is a feature. Y : {array-like, sparse matrix} of shape (n_samples, n_features) An array where each row is a sample and each column is a feature. Returns ------- distances : ndarray of shape (n_samples,) Returns the distances between the row vectors of `X` and the row vectors of `Y`, where `distances[i]` is the distance between `X[i]` and `Y[i]`. Notes ----- The cosine distance is equivalent to the half the squared euclidean distance if each sample is normalized to unit norm. Examples -------- >>> from sklearn.metrics.pairwise import paired_cosine_distances >>> X = [[0, 0, 0], [1, 1, 1]] >>> Y = [[1, 0, 0], [1, 1, 0]] >>> paired_cosine_distances(X, Y) array([0.5 , 0.184]) """ X, Y = check_paired_arrays(X, Y) return 0.5 * row_norms(normalize(X) - normalize(Y), squared=True)
Compute the paired cosine distances between X and Y. Read more in the :ref:`User Guide <metrics>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) An array where each row is a sample and each column is a feature. Y : {array-like, sparse matrix} of shape (n_samples, n_features) An array where each row is a sample and each column is a feature. Returns ------- distances : ndarray of shape (n_samples,) Returns the distances between the row vectors of `X` and the row vectors of `Y`, where `distances[i]` is the distance between `X[i]` and `Y[i]`. Notes ----- The cosine distance is equivalent to the half the squared euclidean distance if each sample is normalized to unit norm. Examples -------- >>> from sklearn.metrics.pairwise import paired_cosine_distances >>> X = [[0, 0, 0], [1, 1, 1]] >>> Y = [[1, 0, 0], [1, 1, 0]] >>> paired_cosine_distances(X, Y) array([0.5 , 0.184])
python
sklearn/metrics/pairwise.py
1,266
[ "X", "Y" ]
false
1
6.48
scikit-learn/scikit-learn
64,340
numpy
false
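The Notes section's claim, that cosine distance is half the squared Euclidean distance between unit-normalized rows, can be checked with a dependency-free sketch. Zero vectors are left unnormalized here, mirroring what sklearn's `normalize` does:

```python
import math


def paired_cosine_distances(X, Y):
    # 0.5 * squared Euclidean distance between unit-normalized row pairs.
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return list(v) if n == 0 else [x / n for x in v]

    return [
        0.5 * sum((a - b) ** 2 for a, b in zip(unit(x), unit(y)))
        for x, y in zip(X, Y)
    ]
```

On the docstring's example inputs this reproduces the documented values 0.5 and ~0.184.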
unmodifiableValueCollection
private static <V extends @Nullable Object> Collection<V> unmodifiableValueCollection( Collection<V> collection) { if (collection instanceof SortedSet) { return Collections.unmodifiableSortedSet((SortedSet<V>) collection); } else if (collection instanceof Set) { return Collections.unmodifiableSet((Set<V>) collection); } else if (collection instanceof List) { return Collections.unmodifiableList((List<V>) collection); } return Collections.unmodifiableCollection(collection); }
Returns an unmodifiable view of the specified collection, preserving the interface for instances of {@code SortedSet}, {@code Set}, {@code List} and {@code Collection}, in that order of preference. @param collection the collection for which to return an unmodifiable view @return an unmodifiable view of the collection
java
android/guava/src/com/google/common/collect/Multimaps.java
1,031
[ "collection" ]
true
4
7.6
google/guava
51,352
javadoc
false
getAutowireCapableBeanFactory
@Override public AutowireCapableBeanFactory getAutowireCapableBeanFactory() throws IllegalStateException { assertBeanFactoryActive(); return this.beanFactory; }
Return the underlying bean factory of this context, available for registering bean definitions. <p><b>NOTE:</b> You need to call {@link #refresh()} to initialize the bean factory and its contained beans with application context semantics (autodetecting BeanFactoryPostProcessors, etc). @return the internal bean factory (as DefaultListableBeanFactory)
java
spring-context/src/main/java/org/springframework/context/support/GenericApplicationContext.java
334
[]
AutowireCapableBeanFactory
true
1
6.08
spring-projects/spring-framework
59,386
javadoc
false
escapeRegExp
function escapeRegExp(string) { string = toString(string); return (string && reHasRegExpChar.test(string)) ? string.replace(reRegExpChar, '\\$&') : string; }
Escapes the `RegExp` special characters "^", "$", "\", ".", "*", "+", "?", "(", ")", "[", "]", "{", "}", and "|" in `string`. @static @memberOf _ @since 3.0.0 @category String @param {string} [string=''] The string to escape. @returns {string} Returns the escaped string. @example _.escapeRegExp('[lodash](https://lodash.com/)'); // => '\[lodash\]\(https://lodash\.com/\)'
javascript
lodash.js
14,380
[ "string" ]
false
3
7.04
lodash/lodash
61,490
jsdoc
false
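Python's stdlib has a direct analog of lodash's escapeRegExp in `re.escape`; escaping and then matching round-trips any literal string:

```python
import re


def escape_regexp(string: str) -> str:
    # Delegate to the stdlib; like lodash, the escaped result matches the
    # original string literally when used as a pattern.
    return re.escape(string)
```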
alterStreamsGroupOffsets
default AlterStreamsGroupOffsetsResult alterStreamsGroupOffsets(String groupId, Map<TopicPartition, OffsetAndMetadata> offsets) { return alterStreamsGroupOffsets(groupId, offsets, new AlterStreamsGroupOffsetsOptions()); }
<p>Alters offsets for the specified group. In order to succeed, the group must be empty. <p>This is a convenience method for {@link #alterStreamsGroupOffsets(String, Map, AlterStreamsGroupOffsetsOptions)} with default options. See the overload for more details. @param groupId The group for which to alter offsets. @param offsets A map of offsets by partition with associated metadata. @return The AlterStreamsGroupOffsetsResult.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
1,305
[ "groupId", "offsets" ]
AlterStreamsGroupOffsetsResult
true
1
6.32
apache/kafka
31,560
javadoc
false
_get_func_name
def _get_func_name(func): """Get function full name. Parameters ---------- func : callable The function object. Returns ------- name : str The function name. """ parts = [] module = inspect.getmodule(func) if module: parts.append(module.__name__) qualname = func.__qualname__ if qualname != func.__name__: parts.append(qualname[: qualname.find(".")]) parts.append(func.__name__) return ".".join(parts)
Get function full name. Parameters ---------- func : callable The function object. Returns ------- name : str The function name.
python
sklearn/utils/_testing.py
453
[ "func" ]
false
3
6.08
scikit-learn/scikit-learn
64,340
numpy
false
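The helper's three steps (module name, enclosing class taken from `__qualname__`, then the function's own name) are pure stdlib and can be exercised as-is; `Greeter` below is a throwaway class for demonstration:

```python
import inspect


def get_func_name(func):
    # Build "module.Class.name" from the module and __qualname__.
    parts = []
    module = inspect.getmodule(func)
    if module:
        parts.append(module.__name__)
    qualname = func.__qualname__
    if qualname != func.__name__:
        parts.append(qualname[: qualname.find(".")])
    parts.append(func.__name__)
    return ".".join(parts)


class Greeter:
    def hello(self):
        pass
```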
xContentType
@Deprecated public static XContentType xContentType(byte[] bytes, int offset, int length) { int totalLength = bytes.length; if (totalLength == 0 || length == 0) { return null; } else if ((offset + length) > totalLength) { return null; } byte first = bytes[offset]; if (JsonXContent.jsonXContent.detectContent(bytes, offset, length)) { return XContentType.JSON; } if (SmileXContent.smileXContent.detectContent(bytes, offset, length)) { return XContentType.SMILE; } if (YamlXContent.yamlXContent.detectContent(bytes, offset, length)) { return XContentType.YAML; } if (CborXContent.cborXContent.detectContent(bytes, offset, length)) { return XContentType.CBOR; } // fallback for JSON int jsonStart = 0; // JSON may be preceded by UTF-8 BOM if (length > 3 && first == (byte) 0xEF && bytes[offset + 1] == (byte) 0xBB && bytes[offset + 2] == (byte) 0xBF) { jsonStart = 3; } // a last chance for JSON for (int i = jsonStart; i < length; i++) { byte b = bytes[offset + i]; if (b == '{') { return XContentType.JSON; } if (Character.isWhitespace(b) == false) { break; } } return null; }
Guesses the content type based on the provided bytes. @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. This method is deprecated to prevent usages of it from spreading further without specific reasons.
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/XContentFactory.java
278
[ "bytes", "offset", "length" ]
XContentType
true
15
6
elastic/elasticsearch
75,680
javadoc
false
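The JSON fallback at the end of the sniffer (skip an optional UTF-8 BOM, then accept if the first non-whitespace byte is '{') is easy to isolate; a minimal sketch of just that fallback, not the full multi-format detection:

```python
def looks_like_json(data: bytes) -> bool:
    # Skip an optional UTF-8 BOM, then scan leading whitespace for '{'.
    i = 3 if data[:3] == b"\xef\xbb\xbf" else 0
    for b in data[i:]:
        if b == ord("{"):
            return True
        if not chr(b).isspace():
            break
    return False
```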