| function_name (string, 1-57 chars) | function_code (string, 20-4.99k chars) | documentation (string, 50-2k chars) | language (string, 5 classes) | file_path (string, 8-166 chars) | line_number (int32, 4-16.7k) | parameters (list, 0-20 items) | return_type (string, 0-131 chars) | has_type_hints (bool, 2 classes) | complexity (int32, 1-51) | quality_score (float32, 6-9.68) | repo_name (string, 34 classes) | repo_stars (int32, 2.9k-242k) | docstring_style (string, 7 classes) | is_async (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
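For orientation, a minimal sketch of loading and filtering rows with this schema, assuming the Hugging Face `datasets` library; the dataset id below is a placeholder, not the actual repository name:

```python
from datasets import load_dataset

# "your-namespace/code-docstring-pairs" is a placeholder id, not the real dataset.
ds = load_dataset("your-namespace/code-docstring-pairs", split="train")

# Keep only well-documented Python functions, using two of the columns above.
python_rows = ds.filter(
    lambda row: row["language"] == "python" and row["quality_score"] >= 7.0
)

for row in python_rows.select(range(3)):
    print(row["function_name"], row["repo_name"], row["docstring_style"])
```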
applyAsInt
|
int applyAsInt(long value) throws E;
|
Applies this function to the given argument.
@param value the function argument
@return the function result
@throws E Thrown when the function fails.
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableLongToIntFunction.java
| 53
|
[
"value"
] | true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
getPropertyAccessorForPropertyPath
|
protected AbstractNestablePropertyAccessor getPropertyAccessorForPropertyPath(String propertyPath) {
int pos = PropertyAccessorUtils.getFirstNestedPropertySeparatorIndex(propertyPath);
// Handle nested properties recursively.
if (pos > -1) {
String nestedProperty = propertyPath.substring(0, pos);
String nestedPath = propertyPath.substring(pos + 1);
AbstractNestablePropertyAccessor nestedPa = getNestedPropertyAccessor(nestedProperty);
return nestedPa.getPropertyAccessorForPropertyPath(nestedPath);
}
else {
return this;
}
}
|
Recursively navigate to return a property accessor for the nested property path.
@param propertyPath property path, which may be nested
@return a property accessor for the target bean
|
java
|
spring-beans/src/main/java/org/springframework/beans/AbstractNestablePropertyAccessor.java
| 816
|
[
"propertyPath"
] |
AbstractNestablePropertyAccessor
| true
| 2
| 7.76
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
toProxyConfigString
|
String toProxyConfigString();
|
As {@code toString()} will normally be delegated to the target,
this returns the equivalent for the AOP proxy.
@return a string description of the proxy configuration
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/Advised.java
| 233
|
[] |
String
| true
| 1
| 6.32
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
initializeReportSignalHandlers
|
function initializeReportSignalHandlers() {
if (getOptionValue('--report-on-signal')) {
const { addSignalHandler } = require('internal/process/report');
addSignalHandler();
}
}
|
Patch the process object with legacy properties and normalizations.
Replace `process.argv[0]` with `process.execPath`, preserving the original `argv[0]` value as `process.argv0`.
Replace `process.argv[1]` with the resolved absolute file path of the entry point, if found.
@param {boolean} expandArgv1 - Whether to replace `process.argv[1]` with the resolved absolute file path of
the main entry point.
@returns {string}
|
javascript
|
lib/internal/process/pre_execution.js
| 452
|
[] | false
| 2
| 6.8
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
get
|
@Override
public final T get() throws ConcurrentException {
T result;
while ((result = reference.get()) == getNoInit()) {
if (factory.compareAndSet(null, this)) {
try {
reference.set(initialize());
} catch (final Throwable t) {
// Allow retry on failure; otherwise callers spin forever.
factory.set(null);
// Rethrow preserving original semantics: unchecked as-is, checked wrapped.
final Throwable checked = ExceptionUtils.throwUnchecked(t);
throw checked instanceof ConcurrentException ? (ConcurrentException) checked : new ConcurrentException(checked);
}
}
}
return result;
}
|
Gets (and initialize, if not initialized yet) the required object.
@return lazily initialized object.
@throws ConcurrentException if the initialization of the object causes an exception.
|
java
|
src/main/java/org/apache/commons/lang3/concurrent/AtomicSafeInitializer.java
| 126
|
[] |
T
| true
| 5
| 7.6
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
hashCode
|
@Override
public int hashCode() {
int hashCode = 1;
int n = size();
for (int i = 0; i < n; i++) {
hashCode = 31 * hashCode + get(i).hashCode();
hashCode = ~~hashCode;
// needed to deal with GWT integer overflow
}
return hashCode;
}
|
Returns a view of this immutable list in reverse order. For example, {@code ImmutableList.of(1,
2, 3).reverse()} is equivalent to {@code ImmutableList.of(3, 2, 1)}.
@return a view of this immutable list in reverse order
@since 7.0
|
java
|
android/guava/src/com/google/common/collect/ImmutableList.java
| 677
|
[] | true
| 2
| 8.24
|
google/guava
| 51,352
|
javadoc
| false
|
|
compareTo
|
@Override
public int compareTo(TimeValue timeValue) {
double thisValue = ((double) duration) * timeUnit.toNanos(1);
double otherValue = ((double) timeValue.duration) * timeValue.timeUnit.toNanos(1);
return Double.compare(thisValue, otherValue);
}
|
@param sValue Value to parse, which may be {@code null}.
@param defaultValue Value to return if {@code sValue} is {@code null}.
@param settingName Name of the parameter or setting. On invalid input, this value is included in the exception message. Otherwise,
this parameter is unused.
@return The {@link TimeValue} which the input string represents, or {@code defaultValue} if the input is {@code null}.
|
java
|
libs/core/src/main/java/org/elasticsearch/core/TimeValue.java
| 452
|
[
"timeValue"
] | true
| 1
| 6.88
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
andThen
|
default FailableByteConsumer<E> andThen(final FailableByteConsumer<E> after) {
Objects.requireNonNull(after);
return (final byte t) -> {
accept(t);
after.accept(t);
};
}
|
Returns a composed {@link FailableByteConsumer} like {@link IntConsumer#andThen(IntConsumer)}.
@param after the operation to perform after this one.
@return a composed {@link FailableByteConsumer} like {@link IntConsumer#andThen(IntConsumer)}.
@throws NullPointerException if {@code after} is null
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableByteConsumer.java
| 62
|
[
"after"
] | true
| 1
| 6.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
get_instances
|
def get_instances(self, filters: list | None = None, instance_ids: list | None = None) -> list:
"""
Get list of instance details, optionally applying filters and selective instance ids.
:param instance_ids: List of ids to get instances for
:param filters: List of filters to specify instances to get
:return: List of instances
"""
description = self.describe_instances(filters=filters, instance_ids=instance_ids)
return [
instance for reservation in description["Reservations"] for instance in reservation["Instances"]
]
|
Get list of instance details, optionally applying filters and selective instance ids.
:param instance_ids: List of ids to get instances for
:param filters: List of filters to specify instances to get
:return: List of instances
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/ec2.py
| 151
|
[
"self",
"filters",
"instance_ids"
] |
list
| true
| 1
| 6.88
|
apache/airflow
| 43,597
|
sphinx
| false
|
createConcurrentMapCache
|
protected Cache createConcurrentMapCache(String name) {
SerializationDelegate actualSerialization = (isStoreByValue() ? this.serialization : null);
return new ConcurrentMapCache(name, new ConcurrentHashMap<>(256), isAllowNullValues(), actualSerialization);
}
|
Create a new ConcurrentMapCache instance for the specified cache name.
@param name the name of the cache
@return the ConcurrentMapCache (or a decorator thereof)
|
java
|
spring-context/src/main/java/org/springframework/cache/concurrent/ConcurrentMapCacheManager.java
| 215
|
[
"name"
] |
Cache
| true
| 2
| 7.68
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
toClass
|
public static Class<?>[] toClass(final Object... array) {
if (array == null) {
return null;
}
if (array.length == 0) {
return ArrayUtils.EMPTY_CLASS_ARRAY;
}
return ArrayUtils.setAll(new Class[array.length], i -> array[i] == null ? null : array[i].getClass());
}
|
Converts an array of {@link Object} in to an array of {@link Class} objects. If any of these objects is null, a null element will be inserted into the
array.
<p>
This method returns {@code null} for a {@code null} input array.
</p>
@param array an {@link Object} array.
@return a {@link Class} array, {@code null} if null array input.
@since 2.4
|
java
|
src/main/java/org/apache/commons/lang3/ClassUtils.java
| 1,566
|
[] | true
| 4
| 8.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
enhanceStackTrace
|
function enhanceStackTrace(err, own) {
let ctorInfo = '';
try {
const { name } = this.constructor;
if (name !== 'EventEmitter')
ctorInfo = ` on ${name} instance`;
} catch {
// Continue regardless of error.
}
const sep = `\nEmitted 'error' event${ctorInfo} at:\n`;
const errStack = ArrayPrototypeSlice(
StringPrototypeSplit(err.stack, '\n'), 1);
const ownStack = ArrayPrototypeSlice(
StringPrototypeSplit(own.stack, '\n'), 1);
const { len, offset } = identicalSequenceRange(ownStack, errStack);
if (len > 0) {
ArrayPrototypeSplice(ownStack, offset + 1, len - 2,
' [... lines matching original stack trace ...]');
}
return err.stack + sep + ArrayPrototypeJoin(ownStack, '\n');
}
|
Returns the current max listener value for the event emitter.
@returns {number}
|
javascript
|
lib/events.js
| 423
|
[
"err",
"own"
] | false
| 4
| 6.08
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
getTarget
|
public Object getTarget() {
if (this.cacheKeyGenerator != null) {
return this.cacheKeyGenerator;
}
Assert.state(this.keyGenerator != null, "No key generator");
return this.keyGenerator;
}
|
Return the target key generator to use in the form of either a {@link KeyGenerator}
or a {@link CacheKeyGenerator}.
|
java
|
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/KeyGeneratorAdapter.java
| 78
|
[] |
Object
| true
| 2
| 6.56
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
handle_removed_files
|
def handle_removed_files(self, known_files: dict[str, set[DagFileInfo]]):
"""
Remove from data structures the files that are missing.
Also, terminate processes that may be running on those removed files.
:param known_files: structure containing known files per-bundle
:return: None
"""
files_set: set[DagFileInfo] = set()
"""Set containing all observed files.
We consolidate to one set for performance.
"""
for v in known_files.values():
files_set |= v
self.purge_removed_files_from_queue(present=files_set)
self.terminate_orphan_processes(present=files_set)
self.remove_orphaned_file_stats(present=files_set)
|
Remove from data structures the files that are missing.
Also, terminate processes that may be running on those removed files.
:param known_files: structure containing known files per-bundle
:return: None
|
python
|
airflow-core/src/airflow/dag_processing/manager.py
| 786
|
[
"self",
"known_files"
] | true
| 2
| 7.76
|
apache/airflow
| 43,597
|
sphinx
| false
|
|
_safe_split
|
def _safe_split(estimator, X, y, indices, train_indices=None):
"""Create subset of dataset and properly handle kernels.
Slice X, y according to indices for cross-validation, but take care of
precomputed kernel-matrices or pairwise affinities / distances.
If ``estimator._pairwise is True``, X needs to be square and
we slice rows and columns. If ``train_indices`` is not None,
we slice rows using ``indices`` (assumed the test set) and columns
using ``train_indices``, indicating the training set.
Labels y will always be indexed only along the first axis.
Parameters
----------
estimator : object
Estimator to determine whether we should slice only rows or rows and
columns.
X : array-like, sparse matrix or iterable
Data to be indexed. If ``estimator._pairwise is True``,
this needs to be a square array-like or sparse matrix.
y : array-like, sparse matrix or iterable
Targets to be indexed.
indices : array of int
Rows to select from X and y.
If ``estimator._pairwise is True`` and ``train_indices is None``
then ``indices`` will also be used to slice columns.
train_indices : array of int or None, default=None
If ``estimator._pairwise is True`` and ``train_indices is not None``,
then ``train_indices`` will be use to slice the columns of X.
Returns
-------
X_subset : array-like, sparse matrix or list
Indexed data.
y_subset : array-like, sparse matrix or list
Indexed targets.
"""
if get_tags(estimator).input_tags.pairwise:
if not hasattr(X, "shape"):
raise ValueError(
"Precomputed kernels or affinity matrices have "
"to be passed as arrays or sparse matrices."
)
# X is a precomputed square kernel matrix
if X.shape[0] != X.shape[1]:
raise ValueError("X should be a square kernel matrix")
if train_indices is None:
X_subset = X[np.ix_(indices, indices)]
else:
X_subset = X[np.ix_(indices, train_indices)]
else:
X_subset = _safe_indexing(X, indices)
if y is not None:
y_subset = _safe_indexing(y, indices)
else:
y_subset = None
return X_subset, y_subset
|
Create subset of dataset and properly handle kernels.
Slice X, y according to indices for cross-validation, but take care of
precomputed kernel-matrices or pairwise affinities / distances.
If ``estimator._pairwise is True``, X needs to be square and
we slice rows and columns. If ``train_indices`` is not None,
we slice rows using ``indices`` (assumed the test set) and columns
using ``train_indices``, indicating the training set.
Labels y will always be indexed only along the first axis.
Parameters
----------
estimator : object
Estimator to determine whether we should slice only rows or rows and
columns.
X : array-like, sparse matrix or iterable
Data to be indexed. If ``estimator._pairwise is True``,
this needs to be a square array-like or sparse matrix.
y : array-like, sparse matrix or iterable
Targets to be indexed.
indices : array of int
Rows to select from X and y.
If ``estimator._pairwise is True`` and ``train_indices is None``
then ``indices`` will also be used to slice columns.
train_indices : array of int or None, default=None
If ``estimator._pairwise is True`` and ``train_indices is not None``,
then ``train_indices`` will be use to slice the columns of X.
Returns
-------
X_subset : array-like, sparse matrix or list
Indexed data.
y_subset : array-like, sparse matrix or list
Indexed targets.
|
python
|
sklearn/utils/metaestimators.py
| 112
|
[
"estimator",
"X",
"y",
"indices",
"train_indices"
] | false
| 9
| 6
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
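To make the pairwise-slicing rule in `_safe_split` concrete, a tiny illustration of the two `np.ix_` branches described above (toy data, not scikit-learn code):

```python
import numpy as np

# Toy "precomputed kernel": a 5x5 symmetric matrix, one row/column per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = X @ X.T                      # shape (5, 5)

test_idx = [3, 4]
train_idx = [0, 1, 2]

# With train_indices: rows are the test samples, columns are the training samples.
K_test = K[np.ix_(test_idx, train_idx)]
print(K_test.shape)              # (2, 3)

# Without train_indices: slice rows and columns symmetrically with the same indices.
K_square = K[np.ix_(test_idx, test_idx)]
print(K_square.shape)            # (2, 2)
```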
defineLabel
|
function defineLabel(): Label {
if (!labelOffsets) {
labelOffsets = [];
}
const label = nextLabelId;
nextLabelId++;
labelOffsets[label] = -1;
return label;
}
|
Defines a label, uses as the target of a Break operation.
|
typescript
|
src/compiler/transformers/generators.ts
| 2,109
|
[] | true
| 2
| 6.88
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
_get_leaf_sorter
|
def _get_leaf_sorter(labels: list[np.ndarray]) -> npt.NDArray[np.intp]:
"""
Returns sorter for the inner most level while preserving the
order of higher levels.
Parameters
----------
labels : list[np.ndarray]
Each ndarray has signed integer dtype, not necessarily identical.
Returns
-------
np.ndarray[np.intp]
"""
if labels[0].size == 0:
return np.empty(0, dtype=np.intp)
if len(labels) == 1:
return get_group_index_sorter(ensure_platform_int(labels[0]))
# find indexers of beginning of each set of
# same-key labels w.r.t all but last level
tic = labels[0][:-1] != labels[0][1:]
for lab in labels[1:-1]:
tic |= lab[:-1] != lab[1:]
starts = np.hstack(([True], tic, [True])).nonzero()[0]
lab = ensure_int64(labels[-1])
return lib.get_level_sorter(lab, ensure_platform_int(starts))
|
Returns sorter for the inner most level while preserving the
order of higher levels.
Parameters
----------
labels : list[np.ndarray]
Each ndarray has signed integer dtype, not necessarily identical.
Returns
-------
np.ndarray[np.intp]
|
python
|
pandas/core/indexes/base.py
| 4,721
|
[
"labels"
] |
npt.NDArray[np.intp]
| true
| 4
| 6.4
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
from
|
public static <V> CacheLoader<Object, V> from(Supplier<V> supplier) {
return new SupplierToCacheLoader<>(supplier);
}
|
Returns a cache loader based on an <i>existing</i> supplier instance. Note that there's no need
to create a <i>new</i> supplier just to pass it in here; just subclass {@code CacheLoader} and
implement {@link #load load} instead.
<p>The returned object is serializable if {@code supplier} is serializable.
@param supplier the supplier to be used for loading values; must never return {@code null}
@return a cache loader that loads values by calling {@link Supplier#get}, irrespective of the
key
|
java
|
android/guava/src/com/google/common/cache/CacheLoader.java
| 156
|
[
"supplier"
] | true
| 1
| 6.64
|
google/guava
| 51,352
|
javadoc
| false
|
|
generate_custom_op_choices
|
def generate_custom_op_choices(
self,
name: str,
decompositions: list[Callable[..., Any]],
input_nodes: list[Buffer],
non_tensor_args: list[dict[str, Any]],
default_impl: Callable[..., Any] | None = None,
input_gen_fns: dict[int, Callable[[Any], torch.Tensor]] | None = None,
) -> list[SubgraphChoiceCaller]:
"""
Generate multiple SubgraphChoiceCaller instances for custom op autotuning.
This method extends SubgraphTemplate to support custom op decompositions,
allowing multiple implementations to compete in autotuning.
Args:
name: Base name for the choices
decompositions: List of decomposition functions to compete in autotuning
input_nodes: List of tensor inputs. All tensor arguments must be passed here.
non_tensor_args: List of non-tensor kwargs only, one dict per corresponding decomposition.
default_impl: Default implementation for layout inference
input_gen_fns: Optional dict mapping input indices to tensor generators
Returns:
List of SubgraphChoiceCaller instances for autotuning
"""
if not decompositions:
return []
assert len(decompositions) == len(non_tensor_args), (
f"decompositions and non_tensor_args must have same length, "
f"got {len(decompositions)} decompositions and {len(non_tensor_args)} kwargs"
)
# Infer layouts and ensure layout consistency for fair autotuning comparison
layouts = [
self._infer_custom_op_layout(
input_nodes, decomp, kwargs, default_impl, input_gen_fns
)
for decomp, kwargs in zip(decompositions, non_tensor_args)
]
# Validate all decompositions produce equivalent layouts for fair comparison
self._validate_layout_equivalence(name, decompositions, layouts)
layout = layouts[0] # All layouts are now validated to be equivalent
choices: list[SubgraphChoiceCaller] = []
for decomp, decomp_kwargs in zip(decompositions, non_tensor_args):
# Create make_fx_graph function for this decomposition
import functools
def make_fx_graph(
*args: Any,
decomp: Callable[..., Any] = decomp,
decomp_kwargs: dict[str, Any] = decomp_kwargs,
) -> Any:
# decomp_kwargs contains all merged parameters: CustomOpConfig params + runtime kwargs
from torch.fx.experimental.proxy_tensor import make_fx
from ..decomposition import select_decomp_table
decomposition_table = select_decomp_table()
return make_fx(
functools.partial(decomp, **decomp_kwargs),
decomposition_table=decomposition_table,
)(*args)
# Generate descriptive name for this variant
variant_name = self._generate_variant_name(decomp, decomp_kwargs)
choice = self.generate(
name=f"{name}_{variant_name}",
input_nodes=input_nodes,
layout=layout,
make_fx_graph=make_fx_graph,
description=f"CustomOp {decomp.__name__}",
input_gen_fns=input_gen_fns,
)
# Cache decomposition info for range-based dispatch
choice.cache_decomposition(decomp, decomp_kwargs)
choices.append(choice)
return choices
|
Generate multiple SubgraphChoiceCaller instances for custom op autotuning.
This method extends SubgraphTemplate to support custom op decompositions,
allowing multiple implementations to compete in autotuning.
Args:
name: Base name for the choices
decompositions: List of decomposition functions to compete in autotuning
input_nodes: List of tensor inputs. All tensor arguments must be passed here.
non_tensor_args: List of non-tensor kwargs only, one dict per corresponding decomposition.
default_impl: Default implementation for layout inference
input_gen_fns: Optional dict mapping input indices to tensor generators
Returns:
List of SubgraphChoiceCaller instances for autotuning
|
python
|
torch/_inductor/codegen/subgraph.py
| 282
|
[
"self",
"name",
"decompositions",
"input_nodes",
"non_tensor_args",
"default_impl",
"input_gen_fns"
] |
list[SubgraphChoiceCaller]
| true
| 3
| 7.52
|
pytorch/pytorch
| 96,034
|
google
| false
|
equals
|
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
return this.compareTo(((TimeValue) o)) == 0;
}
|
@param sValue Value to parse, which may be {@code null}.
@param defaultValue Value to return if {@code sValue} is {@code null}.
@param settingName Name of the parameter or setting. On invalid input, this value is included in the exception message. Otherwise,
this parameter is unused.
@return The {@link TimeValue} which the input string represents, or {@code defaultValue} if the input is {@code null}.
|
java
|
libs/core/src/main/java/org/elasticsearch/core/TimeValue.java
| 435
|
[
"o"
] | true
| 4
| 8.08
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
holder
|
private Holder holder() {
if (holder == null) {
synchronized (data) {
if (holder == null)
holder = new Holder(data);
}
}
return holder;
}
|
Returns a 32-bit bitfield to represent authorized operations for this cluster.
|
java
|
clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java
| 215
|
[] |
Holder
| true
| 3
| 6.4
|
apache/kafka
| 31,560
|
javadoc
| false
|
baseIndexOf
|
function baseIndexOf(array, value, fromIndex) {
return value === value
? strictIndexOf(array, value, fromIndex)
: baseFindIndex(array, baseIsNaN, fromIndex);
}
|
The base implementation of `_.indexOf` without `fromIndex` bounds checks.
@private
@param {Array} array The array to inspect.
@param {*} value The value to search for.
@param {number} fromIndex The index to search from.
@returns {number} Returns the index of the matched value, else `-1`.
|
javascript
|
lodash.js
| 832
|
[
"array",
"value",
"fromIndex"
] | false
| 2
| 6.08
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
compose
|
default FailableIntUnaryOperator<E> compose(final FailableIntUnaryOperator<E> before) {
Objects.requireNonNull(before);
return (final int v) -> applyAsInt(before.applyAsInt(v));
}
|
Returns a composed {@link FailableIntUnaryOperator} like {@link IntUnaryOperator#compose(IntUnaryOperator)}.
@param before the operator to apply before this one.
@return a composed {@link FailableIntUnaryOperator} like {@link IntUnaryOperator#compose(IntUnaryOperator)}.
@throws NullPointerException if before is null.
@see #andThen(FailableIntUnaryOperator)
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableIntUnaryOperator.java
| 86
|
[
"before"
] | true
| 1
| 6
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
readUTF
|
@CanIgnoreReturnValue // to skip a field
@Override
public String readUTF() throws IOException {
return new DataInputStream(in).readUTF();
}
|
Reads a {@code double} as specified by {@link DataInputStream#readDouble()}, except using
little-endian byte order.
@return the next eight bytes of the input stream, interpreted as a {@code double} in
little-endian byte order
@throws IOException if an I/O error occurs
|
java
|
android/guava/src/com/google/common/io/LittleEndianDataInputStream.java
| 176
|
[] |
String
| true
| 1
| 6.4
|
google/guava
| 51,352
|
javadoc
| false
|
toLong
|
public static long toLong(final String str, final long defaultValue) {
try {
return Long.parseLong(str);
} catch (final RuntimeException e) {
return defaultValue;
}
}
|
Converts a {@link String} to a {@code long}, returning a default value if the conversion fails.
<p>
If the string is {@code null}, the default value is returned.
</p>
<pre>
NumberUtils.toLong(null, 1L) = 1L
NumberUtils.toLong("", 1L) = 1L
NumberUtils.toLong("1", 0L) = 1L
</pre>
@param str the string to convert, may be null.
@param defaultValue the default value.
@return the long represented by the string, or the default if conversion fails.
@since 2.1
|
java
|
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
| 1,609
|
[
"str",
"defaultValue"
] | true
| 2
| 8.08
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
construct_strides
|
def construct_strides(
sizes: Sequence[_IntLike],
fill_order: Sequence[int],
) -> Sequence[_IntLike]:
"""From a list of sizes and a fill order, construct the strides of the permuted tensor."""
# Initialize strides
assert len(sizes) == len(fill_order), (
"Length of sizes must match the length of the fill order"
)
strides: list[_IntLike] = [0] * len(sizes)
# Start with stride 1 for the innermost dimension
current_stride: _IntLike = 1
# Iterate through the fill order populating strides
for dim in fill_order:
strides[dim] = current_stride
current_stride *= sizes[dim]
return strides
|
From a list of sizes and a fill order, construct the strides of the permuted tensor.
|
python
|
torch/_inductor/kernel/flex/common.py
| 220
|
[
"sizes",
"fill_order"
] |
Sequence[_IntLike]
| true
| 2
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
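To make the fill-order convention concrete, a small standalone version of the same logic with a worked example (illustrative only; it mirrors the function above rather than quoting PyTorch sources):

```python
from typing import Sequence

def construct_strides(sizes: Sequence[int], fill_order: Sequence[int]) -> list[int]:
    """Build strides for a tensor whose dimensions are filled in `fill_order`."""
    assert len(sizes) == len(fill_order)
    strides = [0] * len(sizes)
    stride = 1
    for dim in fill_order:          # innermost dimension comes first
        strides[dim] = stride
        stride *= sizes[dim]
    return strides

# A (2, 3, 4) tensor filled innermost-to-outermost as dims 2, 1, 0
# (ordinary contiguous layout) gets strides [12, 4, 1].
print(construct_strides([2, 3, 4], [2, 1, 0]))  # [12, 4, 1]
# A permuted fill order gives a permuted stride pattern instead.
print(construct_strides([2, 3, 4], [1, 2, 0]))  # [12, 1, 3]
```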
withHashes
|
public StandardStackTracePrinter withHashes() {
return withHashes(true);
}
|
Return a new {@link StandardStackTracePrinter} from this one that generates and
prints hashes for each stacktrace.
@return a new {@link StandardStackTracePrinter} instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/StandardStackTracePrinter.java
| 262
|
[] |
StandardStackTracePrinter
| true
| 1
| 6.16
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
takeWhile
|
function takeWhile(array, predicate) {
return (array && array.length)
? baseWhile(array, getIteratee(predicate, 3))
: [];
}
|
Creates a slice of `array` with elements taken from the beginning. Elements
are taken until `predicate` returns falsey. The predicate is invoked with
three arguments: (value, index, array).
@static
@memberOf _
@since 3.0.0
@category Array
@param {Array} array The array to query.
@param {Function} [predicate=_.identity] The function invoked per iteration.
@returns {Array} Returns the slice of `array`.
@example
var users = [
{ 'user': 'barney', 'active': false },
{ 'user': 'fred', 'active': false },
{ 'user': 'pebbles', 'active': true }
];
_.takeWhile(users, function(o) { return !o.active; });
// => objects for ['barney', 'fred']
// The `_.matches` iteratee shorthand.
_.takeWhile(users, { 'user': 'barney', 'active': false });
// => objects for ['barney']
// The `_.matchesProperty` iteratee shorthand.
_.takeWhile(users, ['active', false]);
// => objects for ['barney', 'fred']
// The `_.property` iteratee shorthand.
_.takeWhile(users, 'active');
// => []
|
javascript
|
lodash.js
| 8,391
|
[
"array",
"predicate"
] | false
| 3
| 7.2
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
commitOffsetsSync
|
public boolean commitOffsetsSync(Map<TopicPartition, OffsetAndMetadata> offsets, Timer timer) {
invokeCompletedOffsetCommitCallbacks();
if (offsets.isEmpty()) {
// We guarantee that the callbacks for all commitAsync() will be invoked when
// commitSync() completes, even if the user tries to commit empty offsets.
return invokePendingAsyncCommits(timer);
}
long attempts = 0L;
do {
if (coordinatorUnknownAndUnreadySync(timer)) {
return false;
}
RequestFuture<Void> future = sendOffsetCommitRequest(offsets);
client.poll(future, timer);
// We may have had in-flight offset commits when the synchronous commit began. If so, ensure that
// the corresponding callbacks are invoked prior to returning in order to preserve the order that
// the offset commits were applied.
invokeCompletedOffsetCommitCallbacks();
if (future.succeeded()) {
if (interceptors != null)
interceptors.onCommit(offsets);
return true;
}
if (future.failed() && !future.isRetriable())
throw future.exception();
timer.sleep(retryBackoff.backoff(attempts++));
} while (timer.notExpired());
return false;
}
|
Commit offsets synchronously. This method will retry until the commit completes successfully
or an unrecoverable error is encountered.
@param offsets The offsets to be committed
@throws org.apache.kafka.common.errors.AuthorizationException if the consumer is not authorized to the group
or to any of the specified partitions. See the exception for more details
@throws CommitFailedException if an unrecoverable error occurs before the commit can be completed
@throws FencedInstanceIdException if a static member gets fenced
@return If the offset commit was successfully sent and a successful response was received from
the coordinator
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java
| 1,142
|
[
"offsets",
"timer"
] | true
| 7
| 7.6
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
to_dict
|
def to_dict(cls, var: Any) -> dict:
"""Stringifies DAGs and operators contained by var and returns a dict of var."""
# Clear any cached client_defaults to ensure fresh generation for this DAG
# Clear lru_cache for client defaults
SerializedBaseOperator.generate_client_defaults.cache_clear()
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
# Add client_defaults section with only values that differ from schema defaults
# for tasks
client_defaults = SerializedBaseOperator.generate_client_defaults()
if client_defaults:
json_dict["client_defaults"] = {"tasks": client_defaults}
# Validate Serialized DAG with Json Schema. Raises Error if it mismatches
cls.validate_schema(json_dict)
return json_dict
|
Stringifies DAGs and operators contained by var and returns a dict of var.
|
python
|
airflow-core/src/airflow/serialization/serialized_objects.py
| 2,593
|
[
"cls",
"var"
] |
dict
| true
| 2
| 6
|
apache/airflow
| 43,597
|
unknown
| false
|
__setitem__
|
def __setitem__(self, key, value) -> None:
"""
Set one or more values inplace.
This method is not required to satisfy the pandas extension array
interface.
Parameters
----------
key : int, ndarray, or slice
When called from, e.g. ``Series.__setitem__``, ``key`` will be
one of
* scalar int
* ndarray of integers.
* boolean ndarray
* slice object
value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object
value or values to be set of ``key``.
Returns
-------
None
Raises
------
ValueError
If the array is readonly and modification is attempted.
"""
# Some notes to the ExtensionArray implementer who may have ended up
# here. While this method is not required for the interface, if you
# *do* choose to implement __setitem__, then some semantics should be
# observed:
#
# * Setting multiple values : ExtensionArrays should support setting
# multiple values at once, 'key' will be a sequence of integers and
# 'value' will be a same-length sequence.
#
# * Broadcasting : For a sequence 'key' and a scalar 'value',
# each position in 'key' should be set to 'value'.
#
# * Coercion : Most users will expect basic coercion to work. For
# example, a string like '2018-01-01' is coerced to a datetime
# when setting on a datetime64ns array. In general, if the
# __init__ method coerces that value, then so should __setitem__
# Note, also, that Series/DataFrame.where internally use __setitem__
# on a copy of the data.
# Check if the array is readonly
if self._readonly:
raise ValueError("Cannot modify read-only array")
raise NotImplementedError(f"{type(self)} does not implement __setitem__.")
|
Set one or more values inplace.
This method is not required to satisfy the pandas extension array
interface.
Parameters
----------
key : int, ndarray, or slice
When called from, e.g. ``Series.__setitem__``, ``key`` will be
one of
* scalar int
* ndarray of integers.
* boolean ndarray
* slice object
value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object
value or values to be set of ``key``.
Returns
-------
None
Raises
------
ValueError
If the array is readonly and modification is attempted.
|
python
|
pandas/core/arrays/base.py
| 493
|
[
"self",
"key",
"value"
] |
None
| true
| 2
| 6.88
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
coordinator
|
public Optional<Node> coordinator() {
return Optional.ofNullable(this.coordinator);
}
|
Returns the current coordinator node.
@return the current coordinator node.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CoordinatorRequestManager.java
| 252
|
[] | true
| 1
| 6.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
fetchRecords
|
<K, V> ShareInFlightBatch<K, V> fetchRecords(final Deserializers<K, V> deserializers,
final int maxRecords,
final boolean checkCrcs) {
// Creating an empty ShareInFlightBatch
ShareInFlightBatch<K, V> inFlightBatch = new ShareInFlightBatch<>(nodeId, partition, acquisitionLockTimeoutMs);
if (cachedBatchException != null) {
// If the event that a CRC check fails, reject the entire record batch because it is corrupt.
Set<Long> offsets = rejectRecordBatch(inFlightBatch, currentBatch);
inFlightBatch.setException(new ShareInFlightBatchException(cachedBatchException, offsets));
cachedBatchException = null;
return inFlightBatch;
}
if (cachedRecordException != null) {
inFlightBatch.addAcknowledgement(lastRecord.offset(), AcknowledgeType.RELEASE);
inFlightBatch.setException(new ShareInFlightBatchException(cachedRecordException, Set.of(lastRecord.offset())));
cachedRecordException = null;
return inFlightBatch;
}
if (isConsumed)
return inFlightBatch;
initializeNextAcquired();
try {
int recordsInBatch = 0;
boolean currentBatchHasMoreRecords = false;
while (recordsInBatch < maxRecords || currentBatchHasMoreRecords) {
currentBatchHasMoreRecords = nextFetchedRecord(checkCrcs);
if (lastRecord == null) {
// Any remaining acquired records are gaps
while (nextAcquired != null) {
inFlightBatch.addGap(nextAcquired.offset);
nextAcquired = nextAcquiredRecord();
}
break;
}
while (nextAcquired != null) {
if (lastRecord.offset() == nextAcquired.offset) {
// It's acquired, so we parse it and add it to the batch
Optional<Integer> leaderEpoch = maybeLeaderEpoch(currentBatch.partitionLeaderEpoch());
TimestampType timestampType = currentBatch.timestampType();
ConsumerRecord<K, V> record = parseRecord(deserializers, partition, leaderEpoch,
timestampType, lastRecord, nextAcquired.deliveryCount);
inFlightBatch.addRecord(record);
recordsRead++;
bytesRead += lastRecord.sizeInBytes();
recordsInBatch++;
nextAcquired = nextAcquiredRecord();
break;
} else if (lastRecord.offset() < nextAcquired.offset) {
// It's not acquired, so we skip it
break;
} else {
// It's acquired, but there's no non-control record at this offset, so it's a gap
inFlightBatch.addGap(nextAcquired.offset);
}
nextAcquired = nextAcquiredRecord();
}
}
} catch (SerializationException se) {
nextAcquired = nextAcquiredRecord();
if (inFlightBatch.isEmpty()) {
inFlightBatch.addAcknowledgement(lastRecord.offset(), AcknowledgeType.RELEASE);
inFlightBatch.setException(new ShareInFlightBatchException(se, Set.of(lastRecord.offset())));
} else {
cachedRecordException = se;
inFlightBatch.setHasCachedException(true);
}
} catch (CorruptRecordException e) {
if (inFlightBatch.isEmpty()) {
// If the event that a CRC check fails, reject the entire record batch because it is corrupt.
Set<Long> offsets = rejectRecordBatch(inFlightBatch, currentBatch);
inFlightBatch.setException(new ShareInFlightBatchException(e, offsets));
} else {
cachedBatchException = e;
inFlightBatch.setHasCachedException(true);
}
}
return inFlightBatch;
}
|
The {@link RecordBatch batch} of {@link Record records} is converted to a {@link List list} of
{@link ConsumerRecord consumer records} and returned. {@link BufferSupplier Decompression} and
{@link Deserializer deserialization} of the {@link Record record's} key and value are performed in
this step.
@param deserializers {@link Deserializer}s to use to convert the raw bytes to the expected key and value types
@param maxRecords The number of records to return; the number returned may be {@code 0 <= maxRecords}
@param checkCrcs Whether to check the CRC of fetched records
@return {@link ShareInFlightBatch The ShareInFlightBatch containing records and their acknowledgements}
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareCompletedFetch.java
| 175
|
[
"deserializers",
"maxRecords",
"checkCrcs"
] | true
| 15
| 7.68
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
writeStartObject
|
@Override
public void writeStartObject() throws IOException {
if (inRoot()) {
// Use the low level generator to write the startObject so that the root
// start object is always written even if a filtered generator is used
getLowLevelGenerator().writeStartObject();
return;
}
generator.writeStartObject();
}
|
Reference to filtering generator because
writing an empty object '{}' when everything is filtered
out needs a specific treatment
|
java
|
libs/x-content/impl/src/main/java/org/elasticsearch/xcontent/provider/json/JsonXContentGenerator.java
| 141
|
[] |
void
| true
| 2
| 6.4
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
getOptionsDiagnosticsOfConfigFile
|
function getOptionsDiagnosticsOfConfigFile() {
if (!options.configFile) return emptyArray;
let diagnostics = programDiagnostics.getCombinedDiagnostics(program).getDiagnostics(options.configFile.fileName);
forEachResolvedProjectReference(resolvedRef => {
diagnostics = concatenate(diagnostics, programDiagnostics.getCombinedDiagnostics(program).getDiagnostics(resolvedRef.sourceFile.fileName));
});
return diagnostics;
}
|
@returns The line index marked as preceding the diagnostic, or -1 if none was.
|
typescript
|
src/compiler/program.ts
| 3,269
|
[] | false
| 2
| 7.44
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
none
|
private <T, V> BiConsumer<T, BiConsumer<String, V>> none() {
return (item, pairs) -> {
};
}
|
Add pairs using nested naming (for example as used in ECS).
@param <T> the item type
@param pairs callback to add all the pairs
@return a {@link BiConsumer} for use with the {@link JsonWriter}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/structured/ContextPairs.java
| 84
|
[] | true
| 1
| 6.64
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
import_optional_dependency
|
def import_optional_dependency(
name: str,
extra: str = "",
min_version: str | None = None,
*,
errors: Literal["raise", "warn", "ignore"] = "raise",
) -> types.ModuleType | None:
"""
Import an optional dependency.
By default, if a dependency is missing an ImportError with a nice
message will be raised. If a dependency is present, but too old,
we raise.
Parameters
----------
name : str
The module name.
extra : str
Additional text to include in the ImportError message.
errors : str {'raise', 'warn', 'ignore'}
What to do when a dependency is not found or its version is too old.
* raise : Raise an ImportError
* warn : Only applicable when a module's version is to old.
Warns that the version is too old and returns None
* ignore: If the module is not installed, return None, otherwise,
return the module, even if the version is too old.
It's expected that users validate the version locally when
using ``errors="ignore"`` (see. ``io/html.py``)
min_version : str, default None
Specify a minimum version that is different from the global pandas
minimum version required.
Returns
-------
maybe_module : Optional[ModuleType]
The imported module, when found and the version is correct.
None is returned when the package is not found and `errors`
is False, or when the package's version is too old and `errors`
is ``'warn'`` or ``'ignore'``.
"""
assert errors in {"warn", "raise", "ignore"}
package_name = INSTALL_MAPPING.get(name)
install_name = package_name if package_name is not None else name
msg = (
f"`Import {install_name}` failed. {extra} "
f"Use pip or conda to install the {install_name} package."
)
try:
module = importlib.import_module(name)
except ImportError as err:
if errors == "raise":
raise ImportError(msg) from err
return None
# Handle submodules: if we have submodule, grab parent module from sys.modules
parent = name.split(".")[0]
if parent != name:
install_name = parent
module_to_get = sys.modules[install_name]
else:
module_to_get = module
minimum_version = min_version if min_version is not None else VERSIONS.get(parent)
if minimum_version:
version = get_version(module_to_get)
if version and Version(version) < Version(minimum_version):
msg = (
f"Pandas requires version '{minimum_version}' or newer of '{parent}' "
f"(version '{version}' currently installed)."
)
if errors == "warn":
warnings.warn(
msg,
UserWarning,
stacklevel=find_stack_level(),
)
return None
elif errors == "raise":
raise ImportError(msg)
else:
return None
return module
|
Import an optional dependency.
By default, if a dependency is missing an ImportError with a nice
message will be raised. If a dependency is present, but too old,
we raise.
Parameters
----------
name : str
The module name.
extra : str
Additional text to include in the ImportError message.
errors : str {'raise', 'warn', 'ignore'}
What to do when a dependency is not found or its version is too old.
* raise : Raise an ImportError
* warn : Only applicable when a module's version is to old.
Warns that the version is too old and returns None
* ignore: If the module is not installed, return None, otherwise,
return the module, even if the version is too old.
It's expected that users validate the version locally when
using ``errors="ignore"`` (see. ``io/html.py``)
min_version : str, default None
Specify a minimum version that is different from the global pandas
minimum version required.
Returns
-------
maybe_module : Optional[ModuleType]
The imported module, when found and the version is correct.
None is returned when the package is not found and `errors`
is False, or when the package's version is too old and `errors`
is ``'warn'`` or ``'ignore'``.
|
python
|
pandas/compat/_optional.py
| 107
|
[
"name",
"extra",
"min_version",
"errors"
] |
types.ModuleType | None
| true
| 12
| 6.8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
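A simplified, standalone sketch of the same optional-import pattern, written without pandas internals; the module name is only an example and the minimum-version check is omitted for brevity:

```python
import importlib
import warnings

def optional_import(name: str, errors: str = "raise"):
    """Import `name` if available; otherwise raise, warn, or return None."""
    try:
        return importlib.import_module(name)
    except ImportError as err:
        if errors == "raise":
            raise ImportError(f"Optional dependency '{name}' is not installed.") from err
        if errors == "warn":
            warnings.warn(f"Optional dependency '{name}' is missing; feature disabled.")
        return None

# matplotlib is only an example; any optional package works the same way.
plt_mod = optional_import("matplotlib", errors="ignore")
```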
__array__
|
def __array__(
self, dtype: NpDtype | None = None, copy: bool | None = None
) -> np.ndarray:
"""
The numpy array interface.
Users should not call this directly. Rather, it is invoked by
:func:`numpy.array` and :func:`numpy.asarray`.
Parameters
----------
dtype : np.dtype or None
Specifies the dtype for the array.
copy : bool or None, optional
See :func:`numpy.asarray`.
Returns
-------
numpy.array
A numpy array of either the specified dtype or,
if dtype==None (default), the same dtype as
categorical.categories.dtype.
See Also
--------
numpy.asarray : Convert input to numpy.ndarray.
Examples
--------
>>> cat = pd.Categorical(["a", "b"], ordered=True)
The following calls ``cat.__array__``
>>> np.asarray(cat)
array(['a', 'b'], dtype=object)
"""
if copy is False:
raise ValueError(
"Unable to avoid copy while creating an array as requested."
)
ret = take_nd(self.categories._values, self._codes)
# When we're a Categorical[ExtensionArray], like Interval,
# we need to ensure __array__ gets all the way to an
# ndarray.
# `take_nd` should already make a copy, so don't force again.
return np.asarray(ret, dtype=dtype)
|
The numpy array interface.
Users should not call this directly. Rather, it is invoked by
:func:`numpy.array` and :func:`numpy.asarray`.
Parameters
----------
dtype : np.dtype or None
Specifies the dtype for the array.
copy : bool or None, optional
See :func:`numpy.asarray`.
Returns
-------
numpy.array
A numpy array of either the specified dtype or,
if dtype==None (default), the same dtype as
categorical.categories.dtype.
See Also
--------
numpy.asarray : Convert input to numpy.ndarray.
Examples
--------
>>> cat = pd.Categorical(["a", "b"], ordered=True)
The following calls ``cat.__array__``
>>> np.asarray(cat)
array(['a', 'b'], dtype=object)
|
python
|
pandas/core/arrays/categorical.py
| 1,703
|
[
"self",
"dtype",
"copy"
] |
np.ndarray
| true
| 2
| 8.16
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
resolveArguments
|
AutowiredArguments resolveArguments(RegisteredBean registeredBean) {
Assert.notNull(registeredBean, "'registeredBean' must not be null");
return resolveArguments(registeredBean, this.lookup.get(registeredBean));
}
|
Resolve arguments for the specified registered bean.
@param registeredBean the registered bean
@return the resolved constructor or factory method arguments
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/aot/BeanInstanceSupplier.java
| 239
|
[
"registeredBean"
] |
AutowiredArguments
| true
| 1
| 6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
should_cache
|
def should_cache(
arg: ArrayConvertible, unique_share: float = 0.7, check_count: int | None = None
) -> bool:
"""
Decides whether to do caching.
If the percent of unique elements among `check_count` elements less
than `unique_share * 100` then we can do caching.
Parameters
----------
arg: listlike, tuple, 1-d array, Series
unique_share: float, default=0.7, optional
0 < unique_share < 1
check_count: int, optional
0 <= check_count <= len(arg)
Returns
-------
do_caching: bool
Notes
-----
By default for a sequence of less than 50 items in size, we don't do
caching; for the number of elements less than 5000, we take ten percent of
all elements to check for a uniqueness share; if the sequence size is more
than 5000, then we check only the first 500 elements.
All constants were chosen empirically by.
"""
do_caching = True
# default realization
if check_count is None:
# in this case, the gain from caching is negligible
if len(arg) <= start_caching_at:
return False
if len(arg) <= 5000:
check_count = len(arg) // 10
else:
check_count = 500
else:
assert 0 <= check_count <= len(arg), (
"check_count must be in next bounds: [0; len(arg)]"
)
if check_count == 0:
return False
assert 0 < unique_share < 1, "unique_share must be in next bounds: (0; 1)"
try:
# We can't cache if the items are not hashable.
unique_elements = set(islice(arg, check_count))
except TypeError:
return False
if len(unique_elements) > check_count * unique_share:
do_caching = False
return do_caching
|
Decides whether to do caching.
If the percent of unique elements among `check_count` elements less
than `unique_share * 100` then we can do caching.
Parameters
----------
arg: listlike, tuple, 1-d array, Series
unique_share: float, default=0.7, optional
0 < unique_share < 1
check_count: int, optional
0 <= check_count <= len(arg)
Returns
-------
do_caching: bool
Notes
-----
By default for a sequence of less than 50 items in size, we don't do
caching; for the number of elements less than 5000, we take ten percent of
all elements to check for a uniqueness share; if the sequence size is more
than 5000, then we check only the first 500 elements.
All constants were chosen empirically by.
|
python
|
pandas/core/tools/datetimes.py
| 156
|
[
"arg",
"unique_share",
"check_count"
] |
bool
| true
| 8
| 6.88
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
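A standalone illustration of the check-count defaults described in the notes above; `start_caching_at = 50` follows the docstring's 50-element threshold, and the rest is illustrative rather than pandas code:

```python
# `start_caching_at` mirrors the 50-element threshold mentioned in the notes above.
start_caching_at = 50

def default_check_count(n: int) -> int | None:
    """Number of leading elements inspected when no explicit check_count is given."""
    if n <= start_caching_at:
        return None              # None stands for "don't cache at all"
    return n // 10 if n <= 5000 else 500

print(default_check_count(40))     # None -> no caching for tiny inputs
print(default_check_count(2000))   # 200  -> inspect the first ten percent
print(default_check_count(80000))  # 500  -> cap the inspection window
```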
asBindTarget
|
public Bindable<?> asBindTarget() {
return this.bindTarget;
}
|
Return a {@link Bindable} instance suitable that can be used as a target for the
{@link Binder}.
@return a bind target for use with the {@link Binder}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/ConfigurationPropertiesBean.java
| 125
|
[] | true
| 1
| 6.96
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
__init__
|
def __init__(
self,
input_nodes: list[Any],
scalars: Optional[dict[str, Union[float, int]]] = None,
out_dtype: Optional[torch.dtype] = None,
):
"""
Initialize with a tuple of input nodes.
Args:
input_nodes: A tuple of input nodes to store
out_dtype: Optional output dtype to store
"""
self._input_nodes = input_nodes
self._device_name: Optional[str] = None
self._scalars = scalars if scalars is not None else {}
self._out_dtype = out_dtype
assert len(input_nodes) > 0, "Expected at least one input node"
|
Initialize with a tuple of input nodes.
Args:
input_nodes: A tuple of input nodes to store
out_dtype: Optional output dtype to store
|
python
|
torch/_inductor/kernel_inputs.py
| 27
|
[
"self",
"input_nodes",
"scalars",
"out_dtype"
] | true
| 2
| 6.56
|
pytorch/pytorch
| 96,034
|
google
| false
|
|
print
|
public static String print(Duration value, DurationFormat.Style style, DurationFormat.@Nullable Unit unit) {
return switch (style) {
case ISO8601 -> value.toString();
case SIMPLE -> printSimple(value, unit);
case COMPOSITE -> printComposite(value);
};
}
|
Print the specified duration in the specified style using the given unit.
@param value the value to print
@param style the style to print in
@param unit the unit to use for printing, if relevant ({@code null} will default
to ms)
@return the printed result
|
java
|
spring-context/src/main/java/org/springframework/format/datetime/standard/DurationFormatterUtils.java
| 70
|
[
"value",
"style",
"unit"
] |
String
| true
| 1
| 7.04
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
__init__
|
def __init__(self, *, xcoms: dict[str, JsonValue] | None = None, **kwargs) -> None:
"""
Initialize the class with the specified parameters.
:param xcoms: A dictionary of XComs or None.
:param kwargs: Additional keyword arguments.
"""
if "payload" in kwargs:
raise ValueError("Param 'payload' not supported for this class.")
# Yes this is _odd_. It's to support both constructor from users of
# `TaskSuccessEvent(some_xcom_value)` and deserialization by pydantic.
state = kwargs.pop("task_instance_state", self.__pydantic_fields__["task_instance_state"].default)
super().__init__(payload=str(state), task_instance_state=state, **kwargs)
self.xcoms = xcoms
|
Initialize the class with the specified parameters.
:param xcoms: A dictionary of XComs or None.
:param kwargs: Additional keyword arguments.
|
python
|
airflow-core/src/airflow/triggers/base.py
| 184
|
[
"self",
"xcoms"
] |
None
| true
| 2
| 6.88
|
apache/airflow
| 43,597
|
sphinx
| false
|
visitImportCallExpression
|
function visitImportCallExpression(node: ImportCall, rewriteOrShim: boolean): Expression {
if (moduleKind === ModuleKind.None && languageVersion >= ScriptTarget.ES2020) {
return visitEachChild(node, visitor, context);
}
const externalModuleName = getExternalModuleNameLiteral(factory, node, currentSourceFile, host, resolver, compilerOptions);
const firstArgument = visitNode(firstOrUndefined(node.arguments), visitor, isExpression);
// Only use the external module name if it differs from the first argument. This allows us to preserve the quote style of the argument on output.
const argument = externalModuleName && (!firstArgument || !isStringLiteral(firstArgument) || firstArgument.text !== externalModuleName.text)
? externalModuleName
: firstArgument && rewriteOrShim
? isStringLiteral(firstArgument) ? rewriteModuleSpecifier(firstArgument, compilerOptions) : emitHelpers().createRewriteRelativeImportExtensionsHelper(firstArgument)
: firstArgument;
const containsLexicalThis = !!(node.transformFlags & TransformFlags.ContainsLexicalThis);
switch (compilerOptions.module) {
case ModuleKind.AMD:
return createImportCallExpressionAMD(argument, containsLexicalThis);
case ModuleKind.UMD:
return createImportCallExpressionUMD(argument ?? factory.createVoidZero(), containsLexicalThis);
case ModuleKind.CommonJS:
default:
return createImportCallExpressionCommonJS(argument);
}
}
|
Visits the body of a Block to hoist declarations.
@param node The node to visit.
|
typescript
|
src/compiler/transformers/module/module.ts
| 1,208
|
[
"node",
"rewriteOrShim"
] | true
| 10
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
from_breaks
|
def from_breaks(
cls,
breaks,
closed: IntervalClosedType | None = "right",
copy: bool = False,
dtype: Dtype | None = None,
) -> Self:
"""
Construct an IntervalArray from an array of splits.
Parameters
----------
breaks : array-like (1-dimensional)
Left and right bounds for each interval.
closed : {'left', 'right', 'both', 'neither'}, default 'right'
Whether the intervals are closed on the left-side, right-side, both
or neither.
copy : bool, default False
Copy the data.
dtype : dtype or None, default None
If None, dtype will be inferred.
Returns
-------
IntervalArray
See Also
--------
interval_range : Function to create a fixed frequency IntervalIndex.
IntervalArray.from_arrays : Construct from a left and right array.
IntervalArray.from_tuples : Construct from a sequence of tuples.
Examples
--------
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
"""
breaks = _maybe_convert_platform_interval(breaks)
return cls.from_arrays(breaks[:-1], breaks[1:], closed, copy=copy, dtype=dtype)
|
Construct an IntervalArray from an array of splits.
Parameters
----------
breaks : array-like (1-dimensional)
Left and right bounds for each interval.
closed : {'left', 'right', 'both', 'neither'}, default 'right'
Whether the intervals are closed on the left-side, right-side, both
or neither.
copy : bool, default False
Copy the data.
dtype : dtype or None, default None
If None, dtype will be inferred.
Returns
-------
IntervalArray
See Also
--------
interval_range : Function to create a fixed frequency IntervalIndex.
IntervalArray.from_arrays : Construct from a left and right array.
IntervalArray.from_tuples : Construct from a sequence of tuples.
Examples
--------
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
|
python
|
pandas/core/arrays/interval.py
| 486
|
[
"cls",
"breaks",
"closed",
"copy",
"dtype"
] |
Self
| true
| 1
| 6.8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
fromregex
|
def fromregex(file, regexp, dtype, encoding=None):
r"""
Construct an array from a text file, using regular expression parsing.
The returned array is always a structured array, and is constructed from
all matches of the regular expression in the file. Groups in the regular
expression are converted to fields of the structured array.
Parameters
----------
file : file, str, or pathlib.Path
Filename or file object to read.
.. versionchanged:: 1.22.0
Now accepts `os.PathLike` implementations.
regexp : str or regexp
Regular expression used to parse the file.
Groups in the regular expression correspond to fields in the dtype.
dtype : dtype or list of dtypes
Dtype for the structured array; must be a structured datatype.
encoding : str, optional
Encoding used to decode the inputfile. Does not apply to input streams.
Returns
-------
output : ndarray
The output array, containing the part of the content of `file` that
was matched by `regexp`. `output` is always a structured array.
Raises
------
TypeError
When `dtype` is not a valid dtype for a structured array.
See Also
--------
fromstring, loadtxt
Notes
-----
Dtypes for structured arrays can be specified in several forms, but all
forms specify at least the data type and field name. For details see
`basics.rec`.
Examples
--------
>>> import numpy as np
>>> from io import StringIO
>>> text = StringIO("1312 foo\n1534 bar\n444 qux")
>>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything]
>>> output = np.fromregex(text, regexp,
... [('num', np.int64), ('key', 'S3')])
>>> output
array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')],
dtype=[('num', '<i8'), ('key', 'S3')])
>>> output['num']
array([1312, 1534, 444])
"""
own_fh = False
if not hasattr(file, "read"):
file = os.fspath(file)
file = np.lib._datasource.open(file, 'rt', encoding=encoding)
own_fh = True
try:
if not isinstance(dtype, np.dtype):
dtype = np.dtype(dtype)
if dtype.names is None:
raise TypeError('dtype must be a structured datatype.')
content = file.read()
if isinstance(content, bytes) and isinstance(regexp, str):
regexp = asbytes(regexp)
if not hasattr(regexp, 'match'):
regexp = re.compile(regexp)
seq = regexp.findall(content)
if seq and not isinstance(seq[0], tuple):
# Only one group is in the regexp.
# Create the new array as a single data-type and then
# re-interpret as a single-field structured array.
newdtype = np.dtype(dtype[dtype.names[0]])
output = np.array(seq, dtype=newdtype)
output = output.view(dtype)
else:
output = np.array(seq, dtype=dtype)
return output
finally:
if own_fh:
file.close()
|
r"""
Construct an array from a text file, using regular expression parsing.
The returned array is always a structured array, and is constructed from
all matches of the regular expression in the file. Groups in the regular
expression are converted to fields of the structured array.
Parameters
----------
file : file, str, or pathlib.Path
Filename or file object to read.
.. versionchanged:: 1.22.0
Now accepts `os.PathLike` implementations.
regexp : str or regexp
Regular expression used to parse the file.
Groups in the regular expression correspond to fields in the dtype.
dtype : dtype or list of dtypes
Dtype for the structured array; must be a structured datatype.
encoding : str, optional
Encoding used to decode the input file. Does not apply to input streams.
Returns
-------
output : ndarray
The output array, containing the part of the content of `file` that
was matched by `regexp`. `output` is always a structured array.
Raises
------
TypeError
When `dtype` is not a valid dtype for a structured array.
See Also
--------
fromstring, loadtxt
Notes
-----
Dtypes for structured arrays can be specified in several forms, but all
forms specify at least the data type and field name. For details see
`basics.rec`.
Examples
--------
>>> import numpy as np
>>> from io import StringIO
>>> text = StringIO("1312 foo\n1534 bar\n444 qux")
>>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything]
>>> output = np.fromregex(text, regexp,
... [('num', np.int64), ('key', 'S3')])
>>> output
array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')],
dtype=[('num', '<i8'), ('key', 'S3')])
>>> output['num']
array([1312, 1534, 444])
|
python
|
numpy/lib/_npyio_impl.py
| 1,631
|
[
"file",
"regexp",
"dtype",
"encoding"
] | false
| 11
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
_construct_strides
|
def _construct_strides(
sizes: Sequence[int],
fill_order: Sequence[int],
) -> Sequence[int]:
"""From a list of sizes and a fill order, construct the strides of the permuted tensor."""
# Initialize strides
assert len(sizes) == len(fill_order), (
"Length of sizes must match the length of the fill order"
)
strides = [0] * len(sizes)
# Start with stride 1 for the innermost dimension
current_stride = 1
# Iterate through the fill order populating strides
for dim in fill_order:
strides[dim] = current_stride
current_stride *= sizes[dim]
return strides
|
From a list of sizes and a fill order, construct the strides of the permuted tensor.
|
python
|
torch/_higher_order_ops/flex_attention.py
| 35
|
[
"sizes",
"fill_order"
] |
Sequence[int]
| true
| 2
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
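A minimal usage sketch for the `_construct_strides` helper above. The import path is taken from the file_path field of that row; it is a private PyTorch module, so treat the path as an assumption that may change between releases.

from torch._higher_order_ops.flex_attention import _construct_strides  # private helper, may move

sizes = [2, 3, 4]
# Filling the last dimension first yields a contiguous, row-major layout.
print(_construct_strides(sizes, fill_order=[2, 1, 0]))  # [12, 4, 1]
# Filling the first dimension first yields a column-major-style layout.
print(_construct_strides(sizes, fill_order=[0, 1, 2]))  # [1, 2, 6]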
toOffsetDateTime
|
public static OffsetDateTime toOffsetDateTime(final Calendar calendar) {
return OffsetDateTime.ofInstant(calendar.toInstant(), toZoneId(calendar));
}
|
Converts a Calendar to a OffsetDateTime.
@param calendar the Calendar to convert.
@return a OffsetDateTime.
@since 3.17.0
|
java
|
src/main/java/org/apache/commons/lang3/time/CalendarUtils.java
| 85
|
[
"calendar"
] |
OffsetDateTime
| true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
computeIndentation
|
function computeIndentation(
node: TextRangeWithKind,
startLine: number,
inheritedIndentation: number,
parent: Node,
parentDynamicIndentation: DynamicIndentation,
effectiveParentStartLine: number,
): { indentation: number; delta: number; } {
const delta = SmartIndenter.shouldIndentChildNode(options, node) ? options.indentSize! : 0;
if (effectiveParentStartLine === startLine) {
// if node is located on the same line with the parent
// - inherit indentation from the parent
// - push children if either parent of node itself has non-zero delta
return {
indentation: startLine === lastIndentedLine ? indentationOnLastIndentedLine : parentDynamicIndentation.getIndentation(),
delta: Math.min(options.indentSize!, parentDynamicIndentation.getDelta(node) + delta),
};
}
else if (inheritedIndentation === Constants.Unknown) {
if (node.kind === SyntaxKind.OpenParenToken && startLine === lastIndentedLine) {
// this is used when formatting chained method calls
// - we need to get the indentation on last line and the delta of parent
return { indentation: indentationOnLastIndentedLine, delta: parentDynamicIndentation.getDelta(node) };
}
else if (
SmartIndenter.childStartsOnTheSameLineWithElseInIfStatement(parent, node, startLine, sourceFile) ||
SmartIndenter.childIsUnindentedBranchOfConditionalExpression(parent, node, startLine, sourceFile) ||
SmartIndenter.argumentStartsOnSameLineAsPreviousArgument(parent, node, startLine, sourceFile)
) {
return { indentation: parentDynamicIndentation.getIndentation(), delta };
}
else {
return { indentation: parentDynamicIndentation.getIndentation() + parentDynamicIndentation.getDelta(node), delta };
}
}
else {
return { indentation: inheritedIndentation, delta };
}
}
|
Tries to compute the indentation for a list element.
If list element is not in range then
function will pick its actual indentation
so it can be pushed downstream as inherited indentation.
If list element is in the range - its indentation will be equal
to inherited indentation from its predecessors.
|
typescript
|
src/services/formatting/formatting.ts
| 616
|
[
"node",
"startLine",
"inheritedIndentation",
"parent",
"parentDynamicIndentation",
"effectiveParentStartLine"
] | true
| 14
| 6
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
compareConstructorFit
|
static int compareConstructorFit(final Constructor<?> left, final Constructor<?> right, final Class<?>[] actual) {
return compareParameterTypes(Executable.of(left), Executable.of(right), actual);
}
|
Compares the relative fitness of two Constructors in terms of how well they match a set of runtime parameter types, such that a list ordered by the
results of the comparison would return the best match first (least).
@param left the "left" Constructor.
@param right the "right" Constructor.
@param actual the runtime parameter types to match against. {@code left}/{@code right}.
@return int consistent with {@code compare} semantics.
|
java
|
src/main/java/org/apache/commons/lang3/reflect/MemberUtils.java
| 96
|
[
"left",
"right",
"actual"
] | true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
topicNameValues
|
public Map<String, KafkaFuture<TopicDescription>> topicNameValues() {
return nameFutures;
}
|
Use when {@link Admin#describeTopics(TopicCollection, DescribeTopicsOptions)} used a TopicNameCollection
@return a map from topic names to futures which can be used to check the status of
individual topics if the request used topic names, otherwise return null.
|
java
|
clients/src/main/java/org/apache/kafka/clients/admin/DescribeTopicsResult.java
| 70
|
[] | true
| 1
| 6
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
initialize_flask_plugins
|
def initialize_flask_plugins():
"""Collect flask extension points for WEB UI (legacy)."""
global flask_blueprints
global flask_appbuilder_views
global flask_appbuilder_menu_links
if (
flask_blueprints is not None
and flask_appbuilder_views is not None
and flask_appbuilder_menu_links is not None
):
return
ensure_plugins_loaded()
if plugins is None:
raise AirflowPluginException("Can't load plugins.")
log.debug("Initialize legacy Web UI plugin")
flask_blueprints = []
flask_appbuilder_views = []
flask_appbuilder_menu_links = []
for plugin in plugins:
flask_appbuilder_views.extend(plugin.appbuilder_views)
flask_appbuilder_menu_links.extend(plugin.appbuilder_menu_items)
flask_blueprints.extend([{"name": plugin.name, "blueprint": bp} for bp in plugin.flask_blueprints])
if (plugin.admin_views and not plugin.appbuilder_views) or (
plugin.menu_links and not plugin.appbuilder_menu_items
):
log.warning(
"Plugin '%s' may not be compatible with the current Airflow version. "
"Please contact the author of the plugin.",
plugin.name,
)
|
Collect flask extension points for WEB UI (legacy).
|
python
|
airflow-core/src/airflow/plugins_manager.py
| 448
|
[] | false
| 10
| 6.24
|
apache/airflow
| 43,597
|
unknown
| false
|
|
delete
|
def delete(
self, loc: int | np.integer | list[int] | npt.NDArray[np.integer]
) -> Self:
"""
Make new Index with passed location(-s) deleted.
Parameters
----------
loc : int or list of int
Location of item(-s) which will be deleted.
Use a list of locations to delete more than one value at the same time.
Returns
-------
Index
Will be same type as self, except for RangeIndex.
See Also
--------
numpy.delete : Delete any rows and column from NumPy array (ndarray).
Examples
--------
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.delete(1)
Index(['a', 'c'], dtype='str')
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.delete([0, 2])
Index(['b'], dtype='str')
"""
values = self._values
res_values: ArrayLike
if isinstance(values, np.ndarray):
# TODO(__array_function__): special casing will be unnecessary
res_values = np.delete(values, loc)
else:
res_values = values.delete(loc)
# _constructor so RangeIndex-> Index with an int64 dtype
return self._constructor._simple_new(res_values, name=self.name)
|
Make new Index with passed location(-s) deleted.
Parameters
----------
loc : int or list of int
Location of item(-s) which will be deleted.
Use a list of locations to delete more than one value at the same time.
Returns
-------
Index
Will be same type as self, except for RangeIndex.
See Also
--------
numpy.delete : Delete any rows and column from NumPy array (ndarray).
Examples
--------
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.delete(1)
Index(['a', 'c'], dtype='str')
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.delete([0, 2])
Index(['b'], dtype='str')
|
python
|
pandas/core/indexes/base.py
| 7,009
|
[
"self",
"loc"
] |
Self
| true
| 3
| 8.64
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
_json_to_gemm_operation
|
def _json_to_gemm_operation(cls, json_dict: dict[str, Any]) -> "GemmOperation": # type: ignore[name-defined] # noqa: F821
"""Convert JSON dict to GemmOperation object.
Args:
json_dict: Dictionary representation
Returns:
GemmOperation: Reconstructed object
"""
from cutlass_library import DataType
from cutlass_library.gemm_operation import GemmKind, GemmOperation
from cutlass_library.library import (
EpilogueFunctor,
EpilogueFunctor3x,
EpilogueScheduleType,
KernelScheduleType,
MixedInputMode,
SwizzlingFunctor,
TileSchedulerType,
)
# Extract constructor parameters from the JSON dictionary
gemm_kind = cls._json_to_enum(json_dict["gemm_kind"], GemmKind)
arch = json_dict["arch"]
tile_description = cls._json_to_tile_description(json_dict["tile_description"])
A = cls._json_to_tensor_description(json_dict.get("A"), "A")
B = cls._json_to_tensor_description(json_dict.get("B"), "B")
C = cls._json_to_tensor_description(json_dict.get("C"), "C")
element_epilogue = cls._json_to_enum(json_dict["element_epilogue"], DataType)
# Get optional parameters with defaults
epilogue_functor = cls._json_to_enum(
json_dict.get("epilogue_functor"),
EpilogueFunctor3x if json_dict.get("is_3x") else EpilogueFunctor,
)
swizzling_functor = cls._json_to_enum(
json_dict.get("swizzling_functor"), SwizzlingFunctor
)
D = cls._json_to_tensor_description(json_dict.get("D"), "D")
kernel_schedule = cls._json_to_enum(
json_dict.get("kernel_schedule"), KernelScheduleType
)
epilogue_schedule = cls._json_to_enum(
json_dict.get("epilogue_schedule"), EpilogueScheduleType
)
tile_scheduler = cls._json_to_enum(
json_dict.get("tile_scheduler"), TileSchedulerType
)
mixed_input_mode = cls._json_to_enum(
json_dict.get("mixed_input_mode"), MixedInputMode
)
mixed_input_shuffle = json_dict.get("mixed_input_shuffle", False)
# Scale factors
ScaleFactorA = cls._json_to_enum(json_dict.get("ScaleFactorA"), DataType)
ScaleFactorB = cls._json_to_enum(json_dict.get("ScaleFactorB"), DataType)
ScaleFactorD = None
if "ScaleFactorD" in json_dict and "ScaleFactorVectorSize" in json_dict:
ScaleFactorD = {
"tensor": cls._json_to_tensor_description(
json_dict.get("ScaleFactorD"), "ScaleFactorD"
),
"vector_size": json_dict.get("ScaleFactorVectorSize"),
}
ScaleFactorMVecSize = json_dict.get("ScaleFactorMVecSize")
ScaleFactorNVecSize = json_dict.get("ScaleFactorNVecSize")
ScaleFactorKVecSize = json_dict.get("ScaleFactorKVecSize")
# Create the GemmOperation with the extracted parameters
operation = GemmOperation(
gemm_kind=gemm_kind,
arch=arch,
tile_description=tile_description,
A=A,
B=B,
C=C,
element_epilogue=element_epilogue,
epilogue_functor=epilogue_functor,
swizzling_functor=swizzling_functor,
D=D,
kernel_schedule=kernel_schedule,
epilogue_schedule=epilogue_schedule,
tile_scheduler=tile_scheduler,
mixed_input_mode=mixed_input_mode,
mixed_input_shuffle=mixed_input_shuffle,
ScaleFactorA=ScaleFactorA,
ScaleFactorB=ScaleFactorB,
ScaleFactorD=ScaleFactorD,
ScaleFactorMVecSize=ScaleFactorMVecSize,
ScaleFactorNVecSize=ScaleFactorNVecSize,
ScaleFactorKVecSize=ScaleFactorKVecSize,
)
return operation
|
Convert JSON dict to GemmOperation object.
Args:
json_dict: Dictionary representation
Returns:
GemmOperation: Reconstructed object
|
python
|
torch/_inductor/codegen/cuda/serialization.py
| 122
|
[
"cls",
"json_dict"
] |
"GemmOperation"
| true
| 4
| 7.2
|
pytorch/pytorch
| 96,034
|
google
| false
|
of
|
public static CorrelationIdFormatter of(String @Nullable [] spec) {
return of((spec != null) ? List.of(spec) : Collections.emptyList());
}
|
Create a new {@link CorrelationIdFormatter} instance from the given specification.
@param spec a pre-separated specification
@return a new {@link CorrelationIdFormatter} instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/CorrelationIdFormatter.java
| 147
|
[
"spec"
] |
CorrelationIdFormatter
| true
| 2
| 7.04
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
parse
|
@Override
boolean parse(final FastDateParser parser, final Calendar calendar, final String source, final ParsePosition pos, final int maxWidth) {
int idx = pos.getIndex();
int last = source.length();
if (maxWidth == 0) {
// if no maxWidth, strip leading white space
for (; idx < last; ++idx) {
final char c = source.charAt(idx);
if (!Character.isWhitespace(c)) {
break;
}
}
pos.setIndex(idx);
} else {
final int end = idx + maxWidth;
if (last > end) {
last = end;
}
}
for (; idx < last; ++idx) {
final char c = source.charAt(idx);
if (!Character.isDigit(c)) {
break;
}
}
if (pos.getIndex() == idx) {
pos.setErrorIndex(idx);
return false;
}
final int value = Integer.parseInt(source.substring(pos.getIndex(), idx));
pos.setIndex(idx);
calendar.set(field, modify(parser, value));
return true;
}
|
Parses a run of digits from the source text, honoring the maximum field width (or skipping leading
whitespace when no width is given) and storing the parsed value, after any strategy-specific
modification, in the given calendar field.
@param parser The parser calling this strategy
@param calendar The calendar to receive the parsed field value
@param source The text being parsed
@param pos The current parse position; advanced on success or given an error index on failure
@param maxWidth The maximum number of characters to consume, or 0 for no limit
@return true if a value was parsed, false otherwise
|
java
|
src/main/java/org/apache/commons/lang3/time/FastDateParser.java
| 277
|
[
"parser",
"calendar",
"source",
"pos",
"maxWidth"
] | true
| 8
| 7.28
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
copy_object
|
def copy_object(
self,
source_bucket_key: str,
dest_bucket_key: str,
source_bucket_name: str | None = None,
dest_bucket_name: str | None = None,
source_version_id: str | None = None,
acl_policy: str | None = None,
meta_data_directive: str | None = None,
**kwargs,
) -> None:
"""
Create a copy of an object that is already stored in S3.
.. seealso::
- :external+boto3:py:meth:`S3.Client.copy_object`
Note: the S3 connection used here needs to have access to both
source and destination bucket/key.
:param source_bucket_key: The key of the source object.
It can be either full s3:// style url or relative path from root level.
When it's specified as a full s3:// url, please omit source_bucket_name.
:param dest_bucket_key: The key of the object to copy to.
The convention to specify `dest_bucket_key` is the same
as `source_bucket_key`.
:param source_bucket_name: Name of the S3 bucket where the source object is in.
It should be omitted when `source_bucket_key` is provided as a full s3:// url.
:param dest_bucket_name: Name of the S3 bucket to where the object is copied.
It should be omitted when `dest_bucket_key` is provided as a full s3:// url.
:param source_version_id: Version ID of the source object (OPTIONAL)
:param acl_policy: The string to specify the canned ACL policy for the
object to be copied which is private by default.
:param meta_data_directive: Whether to `COPY` the metadata from the source object or `REPLACE` it
with metadata that's provided in the request.
"""
acl_policy = acl_policy or "private"
if acl_policy != NO_ACL:
kwargs["ACL"] = acl_policy
if meta_data_directive:
kwargs["MetadataDirective"] = meta_data_directive
if self._requester_pays:
kwargs["RequestPayer"] = "requester"
dest_bucket_name, dest_bucket_key = self.get_s3_bucket_key(
dest_bucket_name, dest_bucket_key, "dest_bucket_name", "dest_bucket_key"
)
source_bucket_name, source_bucket_key = self.get_s3_bucket_key(
source_bucket_name,
source_bucket_key,
"source_bucket_name",
"source_bucket_key",
)
copy_source = {
"Bucket": source_bucket_name,
"Key": source_bucket_key,
"VersionId": source_version_id,
}
response = self.get_conn().copy_object(
Bucket=dest_bucket_name,
Key=dest_bucket_key,
CopySource=copy_source,
**kwargs,
)
get_hook_lineage_collector().add_input_asset(
context=self,
scheme="s3",
asset_kwargs={"bucket": source_bucket_name, "key": source_bucket_key},
)
get_hook_lineage_collector().add_output_asset(
context=self,
scheme="s3",
asset_kwargs={"bucket": dest_bucket_name, "key": dest_bucket_key},
)
return response
|
Create a copy of an object that is already stored in S3.
.. seealso::
- :external+boto3:py:meth:`S3.Client.copy_object`
Note: the S3 connection used here needs to have access to both
source and destination bucket/key.
:param source_bucket_key: The key of the source object.
It can be either full s3:// style url or relative path from root level.
When it's specified as a full s3:// url, please omit source_bucket_name.
:param dest_bucket_key: The key of the object to copy to.
The convention to specify `dest_bucket_key` is the same
as `source_bucket_key`.
:param source_bucket_name: Name of the S3 bucket where the source object is in.
It should be omitted when `source_bucket_key` is provided as a full s3:// url.
:param dest_bucket_name: Name of the S3 bucket to where the object is copied.
It should be omitted when `dest_bucket_key` is provided as a full s3:// url.
:param source_version_id: Version ID of the source object (OPTIONAL)
:param acl_policy: The string to specify the canned ACL policy for the
object to be copied which is private by default.
:param meta_data_directive: Whether to `COPY` the metadata from the source object or `REPLACE` it
with metadata that's provided in the request.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/s3.py
| 1,379
|
[
"self",
"source_bucket_key",
"dest_bucket_key",
"source_bucket_name",
"dest_bucket_name",
"source_version_id",
"acl_policy",
"meta_data_directive"
] |
None
| true
| 5
| 6.8
|
apache/airflow
| 43,597
|
sphinx
| false
|
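A hedged usage sketch for the `copy_object` hook method above. The bucket names, keys, and connection id are hypothetical, and valid AWS credentials with access to both buckets are assumed; the import path follows the provider module shown in the row.

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

hook = S3Hook(aws_conn_id="aws_default")  # hypothetical connection id
# Full s3:// URLs are used here, so the bucket name arguments are omitted.
hook.copy_object(
    source_bucket_key="s3://example-src-bucket/data/file.csv",
    dest_bucket_key="s3://example-dst-bucket/archive/file.csv",
)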
withBindMethod
|
public Bindable<T> withBindMethod(@Nullable BindMethod bindMethod) {
Assert.state(bindMethod != BindMethod.VALUE_OBJECT || this.value == null,
() -> "Value object binding cannot be used with an existing or supplied value");
return new Bindable<>(this.type, this.boxedType, this.value, this.annotations, this.bindRestrictions,
bindMethod);
}
|
Create an updated {@link Bindable} instance with a specific bind method. To use
{@link BindMethod#VALUE_OBJECT value object binding}, the current instance must not
have an existing or supplied value.
@param bindMethod the method to use to bind the bindable
@return an updated {@link Bindable}
@since 3.0.8
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/Bindable.java
| 240
|
[
"bindMethod"
] | true
| 2
| 7.92
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
execute
|
def execute(self, context: Context) -> str:
"""
Execute AWS Glue Crawler from Airflow.
:return: the name of the current glue crawler.
"""
crawler_name = self.config["Name"]
if self.hook.has_crawler(crawler_name):
self.hook.update_crawler(**self.config)
else:
self.hook.create_crawler(**self.config)
self.log.info("Triggering AWS Glue Crawler")
self.hook.start_crawler(crawler_name)
if self.deferrable:
self.defer(
trigger=GlueCrawlerCompleteTrigger(
crawler_name=crawler_name,
waiter_delay=self.poll_interval,
aws_conn_id=self.aws_conn_id,
region_name=self.region_name,
verify=self.verify,
botocore_config=self.botocore_config,
),
method_name="execute_complete",
)
elif self.wait_for_completion:
self.log.info("Waiting for AWS Glue Crawler")
self.hook.wait_for_crawler_completion(crawler_name=crawler_name, poll_interval=self.poll_interval)
return crawler_name
|
Execute AWS Glue Crawler from Airflow.
:return: the name of the current glue crawler.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/operators/glue_crawler.py
| 87
|
[
"self",
"context"
] |
str
| true
| 5
| 6.88
|
apache/airflow
| 43,597
|
unknown
| false
|
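A hedged sketch of how the operator owning the `execute` method above is typically declared in a DAG. The crawler configuration is hypothetical; only the `Name` key, which the code above reads, is shown, and a real crawler would also need a role, targets, and so on.

from airflow.providers.amazon.aws.operators.glue_crawler import GlueCrawlerOperator

# Hypothetical config; execute() creates or updates the crawler by Name, starts it,
# and either waits for completion or defers, depending on the operator flags.
crawl = GlueCrawlerOperator(
    task_id="run_example_crawler",
    config={"Name": "example-crawler"},
    wait_for_completion=True,
)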
get_na_values
|
def get_na_values(col, na_values, na_fvalues, keep_default_na: bool):
"""
Get the NaN values for a given column.
Parameters
----------
col : str
The name of the column.
na_values : array-like, dict
The object listing the NaN values as strings.
na_fvalues : array-like, dict
The object listing the NaN values as floats.
keep_default_na : bool
If `na_values` is a dict, and the column is not mapped in the
dictionary, whether to return the default NaN values or the empty set.
Returns
-------
nan_tuple : A length-two tuple composed of
1) na_values : the string NaN values for that column.
2) na_fvalues : the float NaN values for that column.
"""
if isinstance(na_values, dict):
if col in na_values:
return na_values[col], na_fvalues[col]
else:
if keep_default_na:
return STR_NA_VALUES, set()
return set(), set()
else:
return na_values, na_fvalues
|
Get the NaN values for a given column.
Parameters
----------
col : str
The name of the column.
na_values : array-like, dict
The object listing the NaN values as strings.
na_fvalues : array-like, dict
The object listing the NaN values as floats.
keep_default_na : bool
If `na_values` is a dict, and the column is not mapped in the
dictionary, whether to return the default NaN values or the empty set.
Returns
-------
nan_tuple : A length-two tuple composed of
1) na_values : the string NaN values for that column.
2) na_fvalues : the float NaN values for that column.
|
python
|
pandas/io/parsers/base_parser.py
| 836
|
[
"col",
"na_values",
"na_fvalues",
"keep_default_na"
] | true
| 6
| 6.88
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
|
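A small sketch of the dict-versus-default behaviour of `get_na_values` above. The import is from a private pandas module (the file_path field of the row), so it is an internal detail and an assumption rather than a stable API.

from pandas.io.parsers.base_parser import get_na_values  # private pandas module

na_values = {"price": {"n/a"}}
na_fvalues = {"price": set()}
# Column present in the dict: its own sentinels are returned.
print(get_na_values("price", na_values, na_fvalues, keep_default_na=True))
# Column absent from the dict: the default string NaN values (or empty sets) are returned.
print(get_na_values("qty", na_values, na_fvalues, keep_default_na=True))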
onSuccess
|
default @Nullable Object onSuccess(ConfigurationPropertyName name, Bindable<?> target, BindContext context,
Object result) {
return result;
}
|
Called when binding of an element ends with a successful result. Implementations
may change the ultimately returned result or perform additional validation.
@param name the name of the element being bound
@param target the item being bound
@param context the bind context
@param result the bound result (never {@code null})
@return the actual result that should be used (may be {@code null})
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/BindHandler.java
| 61
|
[
"name",
"target",
"context",
"result"
] |
Object
| true
| 1
| 6.8
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
getStackTrace
|
public static String getStackTrace(final Throwable throwable) {
if (throwable == null) {
return StringUtils.EMPTY;
}
final StringWriter sw = new StringWriter();
throwable.printStackTrace(new PrintWriter(sw, true));
return sw.toString();
}
|
Gets the stack trace from a Throwable as a String, including suppressed and cause exceptions.
<p>
The result of this method vary by JDK version as this method
uses {@link Throwable#printStackTrace(java.io.PrintWriter)}.
</p>
@param throwable the {@link Throwable} to be examined, may be null.
@return the stack trace as generated by the exception's
{@code printStackTrace(PrintWriter)} method, or an empty String if {@code null} input.
|
java
|
src/main/java/org/apache/commons/lang3/exception/ExceptionUtils.java
| 465
|
[
"throwable"
] |
String
| true
| 2
| 7.92
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
stat
|
function stat(path, options = { bigint: false }, callback) {
if (typeof options === 'function') {
callback = options;
options = kEmptyObject;
}
callback = makeStatsCallback(callback);
const req = new FSReqCallback(options.bigint);
req.oncomplete = callback;
binding.stat(getValidatedPath(path), options.bigint, req);
}
|
Asynchronously gets the stats of a file.
@param {string | Buffer | URL} path
@param {{ bigint?: boolean; }} [options]
@param {(
err?: Error,
stats?: Stats
) => any} callback
@returns {void}
|
javascript
|
lib/fs.js
| 1,617
|
[
"path",
"callback"
] | false
| 2
| 6.08
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
get_db_cluster_snapshot_state
|
def get_db_cluster_snapshot_state(self, snapshot_id: str) -> str:
"""
Get the current state of a DB cluster snapshot.
.. seealso::
- :external+boto3:py:meth:`RDS.Client.describe_db_cluster_snapshots`
:param snapshot_id: The ID of the target DB cluster.
:return: Returns the status of the DB cluster snapshot as a string (eg. "available")
:raises AirflowNotFoundException: If the DB cluster snapshot does not exist.
"""
try:
response = self.conn.describe_db_cluster_snapshots(DBClusterSnapshotIdentifier=snapshot_id)
except self.conn.exceptions.DBClusterSnapshotNotFoundFault as e:
raise AirflowNotFoundException(e)
return response["DBClusterSnapshots"][0]["Status"].lower()
|
Get the current state of a DB cluster snapshot.
.. seealso::
- :external+boto3:py:meth:`RDS.Client.describe_db_cluster_snapshots`
:param snapshot_id: The ID of the target DB cluster.
:return: Returns the status of the DB cluster snapshot as a string (eg. "available")
:raises AirflowNotFoundException: If the DB cluster snapshot does not exist.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/rds.py
| 99
|
[
"self",
"snapshot_id"
] |
str
| true
| 1
| 6.4
|
apache/airflow
| 43,597
|
sphinx
| false
|
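A hedged usage sketch for the RDS hook method above. The snapshot identifier and connection id are hypothetical, AWS credentials are assumed, and the `RdsHook` class name is inferred from the provider module shown in the row.

from airflow.providers.amazon.aws.hooks.rds import RdsHook

hook = RdsHook(aws_conn_id="aws_default")  # hypothetical connection id
# Raises AirflowNotFoundException if the snapshot does not exist.
state = hook.get_db_cluster_snapshot_state("example-cluster-snapshot")
print(state)  # e.g. "available"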
handleError
|
protected void handleError(Throwable ex, Method method, @Nullable Object... params) throws Exception {
if (Future.class.isAssignableFrom(method.getReturnType())) {
ReflectionUtils.rethrowException(ex);
}
else {
// Could not transmit the exception to the caller with default executor
try {
this.exceptionHandler.obtain().handleUncaughtException(ex, method, params);
}
catch (Throwable ex2) {
logger.warn("Exception handler for async method '" + method.toGenericString() +
"' threw unexpected exception itself", ex2);
}
}
}
|
Handles a fatal error thrown while asynchronously invoking the specified
{@link Method}.
<p>If the return type of the method is a {@link Future} object, the original
exception can be propagated by just throwing it at the higher level. However,
for all other cases, the exception will not be transmitted back to the client.
In that latter case, the current {@link AsyncUncaughtExceptionHandler} will be
used to manage such exception.
@param ex the exception to handle
@param method the method that was invoked
@param params the parameters used to invoke the method
|
java
|
spring-aop/src/main/java/org/springframework/aop/interceptor/AsyncExecutionAspectSupport.java
| 307
|
[
"ex",
"method"
] |
void
| true
| 3
| 6.88
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
finalizeSplitBatches
|
private void finalizeSplitBatches(Deque<ProducerBatch> batches) {
// Chain all split batch ProduceRequestResults to the original batch's produceFuture
// Ensures the original batch's future doesn't complete until all split batches complete
for (ProducerBatch splitBatch : batches) {
produceFuture.addDependent(splitBatch.produceFuture);
}
produceFuture.set(ProduceResponse.INVALID_OFFSET, NO_TIMESTAMP, index -> new RecordBatchTooLargeException());
produceFuture.done();
assignProducerStateToBatches(batches);
}
|
Finalize the original batch after it has been split because it was too large for a single request.
Chains the ProduceRequestResult of every split batch to the original batch's produce future, so the
original future only completes once all split batches have completed, then completes the original
future with a RecordBatchTooLargeException and assigns the current producer state to the split batches.
@param batches The batches produced by splitting the original batch
|
java
|
clients/src/main/java/org/apache/kafka/clients/producer/internals/ProducerBatch.java
| 379
|
[
"batches"
] |
void
| true
| 1
| 7.04
|
apache/kafka
| 31,560
|
javadoc
| false
|
electLeaders
|
default ElectLeadersResult electLeaders(ElectionType electionType, Set<TopicPartition> partitions) {
return electLeaders(electionType, partitions, new ElectLeadersOptions());
}
|
Elect a replica as leader for topic partitions.
<p>
This is a convenience method for {@link #electLeaders(ElectionType, Set, ElectLeadersOptions)}
with default options.
@param electionType The type of election to conduct.
@param partitions The topics and partitions for which to conduct elections.
@return The ElectLeadersResult.
|
java
|
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
| 1,092
|
[
"electionType",
"partitions"
] |
ElectLeadersResult
| true
| 1
| 6.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
contains
|
public boolean contains(final char ch) {
synchronized (set) {
return set.stream().anyMatch(range -> range.contains(ch));
}
}
|
Does the {@link CharSet} contain the specified
character {@code ch}.
@param ch the character to check for
@return {@code true} if the set contains the characters
|
java
|
src/main/java/org/apache/commons/lang3/CharSet.java
| 220
|
[
"ch"
] | true
| 1
| 6.72
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
paired_euclidean_distances
|
def paired_euclidean_distances(X, Y):
"""Compute the paired euclidean distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix X.
Y : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix Y.
Returns
-------
distances : ndarray of shape (n_samples,)
Output array/matrix containing the calculated paired euclidean
distances.
Examples
--------
>>> from sklearn.metrics.pairwise import paired_euclidean_distances
>>> X = [[0, 0, 0], [1, 1, 1]]
>>> Y = [[1, 0, 0], [1, 1, 0]]
>>> paired_euclidean_distances(X, Y)
array([1., 1.])
"""
X, Y = check_paired_arrays(X, Y)
return row_norms(X - Y)
|
Compute the paired euclidean distances between X and Y.
Read more in the :ref:`User Guide <metrics>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix X.
Y : {array-like, sparse matrix} of shape (n_samples, n_features)
Input array/matrix Y.
Returns
-------
distances : ndarray of shape (n_samples,)
Output array/matrix containing the calculated paired euclidean
distances.
Examples
--------
>>> from sklearn.metrics.pairwise import paired_euclidean_distances
>>> X = [[0, 0, 0], [1, 1, 1]]
>>> Y = [[1, 0, 0], [1, 1, 0]]
>>> paired_euclidean_distances(X, Y)
array([1., 1.])
|
python
|
sklearn/metrics/pairwise.py
| 1,187
|
[
"X",
"Y"
] | false
| 1
| 6
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
parameterize
|
public static final ParameterizedType parameterize(final Class<?> rawClass, final Map<TypeVariable<?>, Type> typeVariableMap) {
Objects.requireNonNull(rawClass, "rawClass");
Objects.requireNonNull(typeVariableMap, "typeVariableMap");
return parameterizeWithOwner(null, rawClass, extractTypeArgumentsFrom(typeVariableMap, rawClass.getTypeParameters()));
}
|
Creates a parameterized type instance.
@param rawClass the raw class to create a parameterized type instance for.
@param typeVariableMap the map used for parameterization.
@return {@link ParameterizedType}.
@throws NullPointerException if either {@code rawClass} or {@code typeVariableMap} is {@code null}.
@since 3.2
|
java
|
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
| 1,388
|
[
"rawClass",
"typeVariableMap"
] |
ParameterizedType
| true
| 1
| 6.4
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
estimatedSizeInBytes
|
public int estimatedSizeInBytes() {
return builtRecords != null ? builtRecords.sizeInBytes() : estimatedBytesWritten();
}
|
Get an estimate of the number of bytes written to the underlying buffer. The returned value
is exactly correct if the record set is not compressed or if the builder has been closed.
|
java
|
clients/src/main/java/org/apache/kafka/common/record/MemoryRecordsBuilder.java
| 899
|
[] | true
| 2
| 6.96
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
to_clipboard
|
def to_clipboard(
obj, excel: bool | None = True, sep: str | None = None, **kwargs
) -> None: # pragma: no cover
"""
Attempt to write a text representation of the object to the system clipboard.
The clipboard can then be pasted into Excel, for example.
Parameters
----------
obj : the object to write to the clipboard
excel : bool, defaults to True
if True, use the provided separator, writing in a csv
format for allowing easy pasting into excel.
if False, write a string representation of the object
to the clipboard
sep : optional, defaults to tab
other keywords are passed to to_csv
Notes
-----
Requirements for your platform
- Linux: xclip, or xsel (with PyQt4 modules)
- Windows:
- OS X:
"""
encoding = kwargs.pop("encoding", "utf-8")
# testing if an invalid encoding is passed to clipboard
if encoding is not None and encoding.lower().replace("-", "") != "utf8":
raise ValueError("clipboard only supports utf-8 encoding")
from pandas.io.clipboard import clipboard_set
if excel is None:
excel = True
if excel:
try:
if sep is None:
sep = "\t"
buf = StringIO()
# clipboard_set (pyperclip) expects unicode
obj.to_csv(buf, sep=sep, encoding="utf-8", **kwargs)
text = buf.getvalue()
clipboard_set(text)
return
except TypeError:
warnings.warn(
"to_clipboard in excel mode requires a single character separator.",
stacklevel=find_stack_level(),
)
elif sep is not None:
warnings.warn(
"to_clipboard with excel=False ignores the sep argument.",
stacklevel=find_stack_level(),
)
if isinstance(obj, ABCDataFrame):
# str(df) has various unhelpful defaults, like truncation
with option_context("display.max_colwidth", None):
objstr = obj.to_string(**kwargs)
else:
objstr = str(obj)
clipboard_set(objstr)
|
Attempt to write a text representation of the object to the system clipboard.
The clipboard can then be pasted into Excel, for example.
Parameters
----------
obj : the object to write to the clipboard
excel : bool, defaults to True
if True, use the provided separator, writing in a csv
format for allowing easy pasting into excel.
if False, write a string representation of the object
to the clipboard
sep : optional, defaults to tab
other keywords are passed to to_csv
Notes
-----
Requirements for your platform
- Linux: xclip, or xsel (with PyQt4 modules)
- Windows:
- OS X:
|
python
|
pandas/io/clipboards.py
| 135
|
[
"obj",
"excel",
"sep"
] |
None
| true
| 9
| 6.64
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
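In practice the module-level `to_clipboard` above is reached through `DataFrame.to_clipboard`. A minimal sketch, assuming a clipboard backend is available (e.g. xclip or xsel on Linux):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
# excel=True (the default) writes CSV-style text so it pastes cleanly into spreadsheet cells.
df.to_clipboard(excel=True, sep="\t")
# excel=False writes the plain string representation instead; sep is then ignored.
df.to_clipboard(excel=False)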
getBeansWithAnnotation
|
@Override
public Map<String, Object> getBeansWithAnnotation(Class<? extends Annotation> annotationType) {
String[] beanNames = getBeanNamesForAnnotation(annotationType);
Map<String, Object> result = CollectionUtils.newLinkedHashMap(beanNames.length);
for (String beanName : beanNames) {
Object beanInstance = getBean(beanName);
if (!(beanInstance instanceof NullBean)) {
result.put(beanName, beanInstance);
}
}
return result;
}
|
Find all beans which are annotated with the supplied {@link Annotation} type, returning a Map of
bean names with corresponding bean instances.
@param annotationType the type of annotation to look for
@return a Map with the matching beans, containing the bean names as keys and the
corresponding bean instances as values
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultListableBeanFactory.java
| 786
|
[
"annotationType"
] | true
| 2
| 7.6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
validate_indices
|
def validate_indices(indices: np.ndarray, n: int) -> None:
"""
Perform bounds-checking for an indexer.
-1 is allowed for indicating missing values.
Parameters
----------
indices : ndarray
n : int
Length of the array being indexed.
Raises
------
ValueError
Examples
--------
>>> validate_indices(np.array([1, 2]), 3) # OK
>>> validate_indices(np.array([1, -2]), 3)
Traceback (most recent call last):
...
ValueError: negative dimensions are not allowed
>>> validate_indices(np.array([1, 2, 3]), 3)
Traceback (most recent call last):
...
IndexError: indices are out-of-bounds
>>> validate_indices(np.array([-1, -1]), 0) # OK
>>> validate_indices(np.array([0, 1]), 0)
Traceback (most recent call last):
...
IndexError: indices are out-of-bounds
"""
if len(indices):
min_idx = indices.min()
if min_idx < -1:
msg = f"'indices' contains values less than allowed ({min_idx} < -1)"
raise ValueError(msg)
max_idx = indices.max()
if max_idx >= n:
raise IndexError("indices are out-of-bounds")
|
Perform bounds-checking for an indexer.
-1 is allowed for indicating missing values.
Parameters
----------
indices : ndarray
n : int
Length of the array being indexed.
Raises
------
ValueError
Examples
--------
>>> validate_indices(np.array([1, 2]), 3) # OK
>>> validate_indices(np.array([1, -2]), 3)
Traceback (most recent call last):
...
ValueError: negative dimensions are not allowed
>>> validate_indices(np.array([1, 2, 3]), 3)
Traceback (most recent call last):
...
IndexError: indices are out-of-bounds
>>> validate_indices(np.array([-1, -1]), 0) # OK
>>> validate_indices(np.array([0, 1]), 0)
Traceback (most recent call last):
...
IndexError: indices are out-of-bounds
|
python
|
pandas/core/indexers/utils.py
| 189
|
[
"indices",
"n"
] |
None
| true
| 4
| 7.84
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
arccos
|
def arccos(x):
"""
Compute the inverse cosine of x.
Return the "principal value" (for a description of this, see
`numpy.arccos`) of the inverse cosine of `x`. For real `x` such that
`abs(x) <= 1`, this is a real number in the closed interval
:math:`[0, \\pi]`. Otherwise, the complex principal value is returned.
Parameters
----------
x : array_like or scalar
The value(s) whose arccos is (are) required.
Returns
-------
out : ndarray or scalar
The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so
is `out`, otherwise an array object is returned.
See Also
--------
numpy.arccos
Notes
-----
For an arccos() that returns ``NAN`` when real `x` is not in the
interval ``[-1,1]``, use `numpy.arccos`.
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.arccos(1) # a scalar is returned
0.0
>>> np.emath.arccos([1,2])
array([0.-0.j , 0.-1.317j])
"""
x = _fix_real_abs_gt_1(x)
return nx.arccos(x)
|
Compute the inverse cosine of x.
Return the "principal value" (for a description of this, see
`numpy.arccos`) of the inverse cosine of `x`. For real `x` such that
`abs(x) <= 1`, this is a real number in the closed interval
:math:`[0, \\pi]`. Otherwise, the complex principal value is returned.
Parameters
----------
x : array_like or scalar
The value(s) whose arccos is (are) required.
Returns
-------
out : ndarray or scalar
The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so
is `out`, otherwise an array object is returned.
See Also
--------
numpy.arccos
Notes
-----
For an arccos() that returns ``NAN`` when real `x` is not in the
interval ``[-1,1]``, use `numpy.arccos`.
Examples
--------
>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.arccos(1) # a scalar is returned
0.0
>>> np.emath.arccos([1,2])
array([0.-0.j , 0.-1.317j])
|
python
|
numpy/lib/_scimath_impl.py
| 496
|
[
"x"
] | false
| 1
| 6.32
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
flatnotmasked_edges
|
def flatnotmasked_edges(a):
"""
Find the indices of the first and last unmasked values.
Expects a 1-D `MaskedArray`, returns None if all values are masked.
Parameters
----------
a : array_like
Input 1-D `MaskedArray`
Returns
-------
edges : ndarray or None
The indices of first and last non-masked value in the array.
Returns None if all values are masked.
See Also
--------
flatnotmasked_contiguous, notmasked_contiguous, notmasked_edges
clump_masked, clump_unmasked
Notes
-----
Only accepts 1-D arrays.
Examples
--------
>>> import numpy as np
>>> a = np.ma.arange(10)
>>> np.ma.flatnotmasked_edges(a)
array([0, 9])
>>> mask = (a < 3) | (a > 8) | (a == 5)
>>> a[mask] = np.ma.masked
>>> np.array(a[~a.mask])
array([3, 4, 6, 7, 8])
>>> np.ma.flatnotmasked_edges(a)
array([3, 8])
>>> a[:] = np.ma.masked
>>> print(np.ma.flatnotmasked_edges(a))
None
"""
m = getmask(a)
if m is nomask or not np.any(m):
return np.array([0, a.size - 1])
unmasked = np.flatnonzero(~m)
if len(unmasked) > 0:
return unmasked[[0, -1]]
else:
return None
|
Find the indices of the first and last unmasked values.
Expects a 1-D `MaskedArray`, returns None if all values are masked.
Parameters
----------
a : array_like
Input 1-D `MaskedArray`
Returns
-------
edges : ndarray or None
The indices of first and last non-masked value in the array.
Returns None if all values are masked.
See Also
--------
flatnotmasked_contiguous, notmasked_contiguous, notmasked_edges
clump_masked, clump_unmasked
Notes
-----
Only accepts 1-D arrays.
Examples
--------
>>> import numpy as np
>>> a = np.ma.arange(10)
>>> np.ma.flatnotmasked_edges(a)
array([0, 9])
>>> mask = (a < 3) | (a > 8) | (a == 5)
>>> a[mask] = np.ma.masked
>>> np.array(a[~a.mask])
array([3, 4, 6, 7, 8])
>>> np.ma.flatnotmasked_edges(a)
array([3, 8])
>>> a[:] = np.ma.masked
>>> print(np.ma.flatnotmasked_edges(a))
None
|
python
|
numpy/ma/extras.py
| 1,869
|
[
"a"
] | false
| 5
| 7.52
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
getMetadata
|
public CandidateComponentsMetadata getMetadata() {
CandidateComponentsMetadata metadata = new CandidateComponentsMetadata();
for (ItemMetadata item : this.metadataItems) {
metadata.add(item);
}
if (this.previousMetadata != null) {
List<ItemMetadata> items = this.previousMetadata.getItems();
for (ItemMetadata item : items) {
if (shouldBeMerged(item)) {
metadata.add(item);
}
}
}
return metadata;
}
|
Build the {@link CandidateComponentsMetadata} from the items collected so far, merging in any
items from the previous metadata that should still be retained.
@return the candidate components metadata
|
java
|
spring-context-indexer/src/main/java/org/springframework/context/index/processor/MetadataCollector.java
| 78
|
[] |
CandidateComponentsMetadata
| true
| 3
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
of
|
@Contract("null, _ -> null; !null, _ -> !null")
public static @Nullable OriginTrackedValue of(@Nullable Object value, @Nullable Origin origin) {
if (value == null) {
return null;
}
if (value instanceof CharSequence charSequence) {
return new OriginTrackedCharSequence(charSequence, origin);
}
return new OriginTrackedValue(value, origin);
}
|
Create an {@link OriginTrackedValue} containing the specified {@code value} and
{@code origin}. If the source value implements {@link CharSequence} then so will
the resulting {@link OriginTrackedValue}.
@param value the source value
@param origin the origin
@return an {@link OriginTrackedValue} or {@code null} if the source value was
{@code null}.
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/origin/OriginTrackedValue.java
| 89
|
[
"value",
"origin"
] |
OriginTrackedValue
| true
| 3
| 7.6
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
ensureValid
|
public void ensureValid() {
if (sizeInBytes() < RECORD_OVERHEAD_V0)
throw new CorruptRecordException("Record is corrupt (crc could not be retrieved as the record is too "
+ "small, size = " + sizeInBytes() + ")");
if (!isValid())
throw new CorruptRecordException("Record is corrupt (stored crc = " + checksum()
+ ", computed crc = " + computeChecksum() + ")");
}
|
Throw a CorruptRecordException if isValid is false for this record
|
java
|
clients/src/main/java/org/apache/kafka/common/record/LegacyRecord.java
| 129
|
[] |
void
| true
| 3
| 6.24
|
apache/kafka
| 31,560
|
javadoc
| false
|
getListByRange
|
function getListByRange(start: number, end: number, node: Node, sourceFile: SourceFile): NodeArray<Node> | undefined {
switch (node.kind) {
case SyntaxKind.TypeReference:
return getList((node as TypeReferenceNode).typeArguments);
case SyntaxKind.ObjectLiteralExpression:
return getList((node as ObjectLiteralExpression).properties);
case SyntaxKind.ArrayLiteralExpression:
return getList((node as ArrayLiteralExpression).elements);
case SyntaxKind.TypeLiteral:
return getList((node as TypeLiteralNode).members);
case SyntaxKind.FunctionDeclaration:
case SyntaxKind.FunctionExpression:
case SyntaxKind.ArrowFunction:
case SyntaxKind.MethodDeclaration:
case SyntaxKind.MethodSignature:
case SyntaxKind.CallSignature:
case SyntaxKind.Constructor:
case SyntaxKind.ConstructorType:
case SyntaxKind.ConstructSignature:
return getList((node as SignatureDeclaration).typeParameters) || getList((node as SignatureDeclaration).parameters);
case SyntaxKind.GetAccessor:
return getList((node as GetAccessorDeclaration).parameters);
case SyntaxKind.ClassDeclaration:
case SyntaxKind.ClassExpression:
case SyntaxKind.InterfaceDeclaration:
case SyntaxKind.TypeAliasDeclaration:
case SyntaxKind.JSDocTemplateTag:
return getList((node as ClassDeclaration | ClassExpression | InterfaceDeclaration | TypeAliasDeclaration | JSDocTemplateTag).typeParameters);
case SyntaxKind.NewExpression:
case SyntaxKind.CallExpression:
return getList((node as CallExpression).typeArguments) || getList((node as CallExpression).arguments);
case SyntaxKind.VariableDeclarationList:
return getList((node as VariableDeclarationList).declarations);
case SyntaxKind.NamedImports:
case SyntaxKind.NamedExports:
return getList((node as NamedImportsOrExports).elements);
case SyntaxKind.ObjectBindingPattern:
case SyntaxKind.ArrayBindingPattern:
return getList((node as ObjectBindingPattern | ArrayBindingPattern).elements);
}
function getList(list: NodeArray<Node> | undefined): NodeArray<Node> | undefined {
return list && rangeContainsStartEnd(getVisualListRange(node, list, sourceFile), start, end) ? list : undefined;
}
}
|
Returns the child `NodeArray` of the given node (type arguments, parameters, elements, declarations,
etc., depending on the node kind) whose visual range contains the `[start, end]` range, or
`undefined` if the node has no such list or the range falls outside it.
|
typescript
|
src/services/formatting/smartIndenter.ts
| 489
|
[
"start",
"end",
"node",
"sourceFile"
] | true
| 5
| 8.32
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
readObject
|
private JSONObject readObject() throws JSONException {
JSONObject result = new JSONObject();
/* Peek to see if this is the empty object. */
int first = nextCleanInternal();
if (first == '}') {
return result;
}
else if (first != -1) {
this.pos--;
}
while (true) {
Object name = nextValue();
if (!(name instanceof String)) {
if (name == null) {
throw syntaxError("Names cannot be null");
}
else {
throw syntaxError(
"Names must be strings, but " + name + " is of type " + name.getClass().getName());
}
}
/*
* Expect the name/value separator to be either a colon ':', an equals sign
* '=', or an arrow "=>". The last two are bogus but we include them because
* that's what the original implementation did.
*/
int separator = nextCleanInternal();
if (separator != ':' && separator != '=') {
throw syntaxError("Expected ':' after " + name);
}
if (this.pos < this.in.length() && this.in.charAt(this.pos) == '>') {
this.pos++;
}
result.put((String) name, nextValue());
switch (nextCleanInternal()) {
case '}':
return result;
case ';', ',':
continue;
default:
throw syntaxError("Unterminated object");
}
}
}
|
Reads a sequence of key/value pairs and the trailing closing brace '}' of an
object. The opening brace '{' should have already been read.
@return an object
@throws JSONException if processing of json failed
|
java
|
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONTokener.java
| 354
|
[] |
JSONObject
| true
| 10
| 8.4
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
init_gradient_and_hessian
|
def init_gradient_and_hessian(self, n_samples, dtype=np.float64, order="F"):
"""Initialize arrays for gradients and hessians.
Unless hessians are constant, arrays are initialized with undefined values.
Parameters
----------
n_samples : int
The number of samples, usually passed to `fit()`.
dtype : {np.float64, np.float32}, default=np.float64
The dtype of the arrays gradient and hessian.
order : {'C', 'F'}, default='F'
Order of the arrays gradient and hessian. The default 'F' makes the arrays
contiguous along samples.
Returns
-------
gradient : C-contiguous array of shape (n_samples,) or array of shape \
(n_samples, n_classes)
Empty array (allocated but not initialized) to be used as argument
gradient_out.
hessian : C-contiguous array of shape (n_samples,), array of shape
(n_samples, n_classes) or shape (1,)
Empty (allocated but not initialized) array to be used as argument
hessian_out.
If constant_hessian is True (e.g. `HalfSquaredError`), the array is
initialized to ``1``.
"""
if dtype not in (np.float32, np.float64):
raise ValueError(
"Valid options for 'dtype' are np.float32 and np.float64. "
f"Got dtype={dtype} instead."
)
if self.is_multiclass:
shape = (n_samples, self.n_classes)
else:
shape = (n_samples,)
gradient = np.empty(shape=shape, dtype=dtype, order=order)
if self.constant_hessian:
# If the hessians are constant, we consider them equal to 1.
# - This is correct for HalfSquaredError
# - For AbsoluteError, hessians are actually 0, but they are
# always ignored anyway.
hessian = np.ones(shape=(1,), dtype=dtype)
else:
hessian = np.empty(shape=shape, dtype=dtype, order=order)
return gradient, hessian
|
Initialize arrays for gradients and hessians.
Unless hessians are constant, arrays are initialized with undefined values.
Parameters
----------
n_samples : int
The number of samples, usually passed to `fit()`.
dtype : {np.float64, np.float32}, default=np.float64
The dtype of the arrays gradient and hessian.
order : {'C', 'F'}, default='F'
Order of the arrays gradient and hessian. The default 'F' makes the arrays
contiguous along samples.
Returns
-------
gradient : C-contiguous array of shape (n_samples,) or array of shape \
(n_samples, n_classes)
Empty array (allocated but not initialized) to be used as argument
gradient_out.
hessian : C-contiguous array of shape (n_samples,), array of shape
(n_samples, n_classes) or shape (1,)
Empty (allocated but not initialized) array to be used as argument
hessian_out.
If constant_hessian is True (e.g. `HalfSquaredError`), the array is
initialized to ``1``.
|
python
|
sklearn/_loss/loss.py
| 477
|
[
"self",
"n_samples",
"dtype",
"order"
] | false
| 6
| 6.08
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
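A hedged sketch of `init_gradient_and_hessian` above, using `HalfSquaredError`, the constant-hessian loss the docstring itself mentions. `sklearn._loss.loss` is a private module, so the import path is an assumption that may change between scikit-learn releases.

import numpy as np
from sklearn._loss.loss import HalfSquaredError  # private module, import path assumed

loss = HalfSquaredError()
gradient, hessian = loss.init_gradient_and_hessian(n_samples=8, dtype=np.float64)
print(gradient.shape)  # (8,) -- allocated but uninitialized
print(hessian.shape)   # (1,) -- constant hessian, initialized to ones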
detect
|
public static PeriodStyle detect(String value) {
Assert.notNull(value, "'value' must not be null");
for (PeriodStyle candidate : values()) {
if (candidate.matches(value)) {
return candidate;
}
}
throw new IllegalArgumentException("'" + value + "' is not a valid period");
}
|
Detect the style from the given source value.
@param value the source value
@return the period style
@throws IllegalArgumentException if the value is not a known style
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/convert/PeriodStyle.java
| 208
|
[
"value"
] |
PeriodStyle
| true
| 2
| 7.92
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
print_job_logs
|
def print_job_logs(
self,
job_name: str,
run_id: str,
continuation_tokens: LogContinuationTokens,
):
"""
Print the latest job logs to the Airflow task log and updates the continuation tokens.
:param continuation_tokens: the tokens where to resume from when reading logs.
The object gets updated with the new tokens by this method.
"""
log_client = self.logs_hook.get_conn()
paginator = log_client.get_paginator("filter_log_events")
job_run = self.conn.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
# StartTime needs to be an int and is Epoch time in milliseconds
start_time = int(job_run["StartedOn"].timestamp() * 1000)
def display_logs_from(log_group: str, continuation_token: str | None) -> str | None:
"""Mutualize iteration over the 2 different log streams glue jobs write to."""
fetched_logs = []
next_token = continuation_token
try:
for response in paginator.paginate(
logGroupName=log_group,
logStreamNames=[run_id],
startTime=start_time,
PaginationConfig={"StartingToken": continuation_token},
):
fetched_logs.extend([event["message"] for event in response["events"]])
# if the response is empty there is no nextToken in it
next_token = response.get("nextToken") or next_token
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
# we land here when the log groups/streams don't exist yet
self.log.warning(
"No new Glue driver logs so far.\n"
"If this persists, check the CloudWatch dashboard at: %r.",
f"https://{self.conn_region_name}.console.aws.amazon.com/cloudwatch/home",
)
else:
raise
if len(fetched_logs):
# Add a tab to indent those logs and distinguish them from airflow logs.
# Log lines returned already contain a newline character at the end.
messages = "\t".join(fetched_logs)
self.log.info("Glue Job Run %s Logs:\n\t%s", log_group, messages)
else:
self.log.info("No new log from the Glue Job in %s", log_group)
return next_token
log_group_prefix = job_run["LogGroupName"]
log_group_default = f"{log_group_prefix}/{DEFAULT_LOG_SUFFIX}"
log_group_error = f"{log_group_prefix}/{ERROR_LOG_SUFFIX}"
# one would think that the error log group would contain only errors, but it actually contains
# a lot of interesting logs too, so it's valuable to have both
continuation_tokens.output_stream_continuation = display_logs_from(
log_group_default, continuation_tokens.output_stream_continuation
)
continuation_tokens.error_stream_continuation = display_logs_from(
log_group_error, continuation_tokens.error_stream_continuation
)
|
Print the latest job logs to the Airflow task log and updates the continuation tokens.
:param continuation_tokens: the tokens where to resume from when reading logs.
The object gets updated with the new tokens by this method.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/glue.py
| 310
|
[
"self",
"job_name",
"run_id",
"continuation_tokens"
] | true
| 7
| 6.8
|
apache/airflow
| 43,597
|
sphinx
| false
|
|
handlePendingDisconnects
|
private void handlePendingDisconnects() {
lock.lock();
try {
while (true) {
Node node = pendingDisconnects.poll();
if (node == null)
break;
failUnsentRequests(node, DisconnectException.INSTANCE);
client.disconnect(node.idString());
}
} finally {
lock.unlock();
}
}
|
Drain the queue of pending disconnects while holding the client lock: for each queued node, fail any
unsent requests with a DisconnectException and disconnect the underlying network client from that node.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
| 459
|
[] |
void
| true
| 3
| 7.92
|
apache/kafka
| 31,560
|
javadoc
| false
|
substringAfter
|
public static String substringAfter(final String str, final int find) {
if (isEmpty(str)) {
return str;
}
final int pos = str.indexOf(find);
if (pos == INDEX_NOT_FOUND) {
return EMPTY;
}
return str.substring(pos + 1);
}
|
Gets the substring after the first occurrence of a separator. The separator is not returned.
<p>
A {@code null} string input will return {@code null}. An empty ("") string input will return the empty string.
</p>
<p>
If nothing is found, the empty string is returned.
</p>
<pre>
StringUtils.substringAfter(null, *) = null
StringUtils.substringAfter("", *) = ""
StringUtils.substringAfter("abc", 'a') = "bc"
StringUtils.substringAfter("abcba", 'b') = "cba"
StringUtils.substringAfter("abc", 'c') = ""
StringUtils.substringAfter("abc", 'd') = ""
StringUtils.substringAfter(" abc", 32) = "abc"
</pre>
@param str the String to get a substring from, may be null.
@param find the character (Unicode code point) to find.
@return the substring after the first occurrence of the specified character, {@code null} if null String input.
@since 3.11
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 8,204
|
[
"str",
"find"
] |
String
| true
| 3
| 7.76
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
findPropertyType
|
public static Class<?> findPropertyType(String propertyName, Class<?> @Nullable ... beanClasses) {
if (beanClasses != null) {
for (Class<?> beanClass : beanClasses) {
PropertyDescriptor pd = getPropertyDescriptor(beanClass, propertyName);
if (pd != null) {
return pd.getPropertyType();
}
}
}
return Object.class;
}
|
Determine the bean property type for the given property from the
given classes/interfaces, if possible.
@param propertyName the name of the bean property
@param beanClasses the classes to check against
@return the property type, or {@code Object.class} as fallback
|
java
|
spring-beans/src/main/java/org/springframework/beans/BeanUtils.java
| 599
|
[
"propertyName"
] | true
| 3
| 7.76
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
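A minimal sketch of how findPropertyType resolves a property type, assuming spring-beans is on the classpath; the Customer class here is hypothetical and exists only for illustration.

import org.springframework.beans.BeanUtils;

public class FindPropertyTypeDemo {

    public static class Customer {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // Resolved from Customer's property descriptor: class java.lang.String
        System.out.println(BeanUtils.findPropertyType("name", Customer.class));
        // Unknown properties fall back to Object.class rather than null.
        System.out.println(BeanUtils.findPropertyType("missing", Customer.class));
    }
}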
parse
|
public static CacheBuilderSpec parse(String cacheBuilderSpecification) {
CacheBuilderSpec spec = new CacheBuilderSpec(cacheBuilderSpecification);
if (!cacheBuilderSpecification.isEmpty()) {
for (String keyValuePair : KEYS_SPLITTER.split(cacheBuilderSpecification)) {
List<String> keyAndValue = ImmutableList.copyOf(KEY_VALUE_SPLITTER.split(keyValuePair));
checkArgument(!keyAndValue.isEmpty(), "blank key-value pair");
checkArgument(
keyAndValue.size() <= 2,
"key-value pair %s with more than one equals sign",
keyValuePair);
// Find the ValueParser for the current key.
String key = keyAndValue.get(0);
ValueParser valueParser = VALUE_PARSERS.get(key);
checkArgument(valueParser != null, "unknown key %s", key);
String value = keyAndValue.size() == 1 ? null : keyAndValue.get(1);
valueParser.parse(spec, key, value);
}
}
return spec;
}
|
Creates a CacheBuilderSpec from a string.
@param cacheBuilderSpecification the string form
|
java
|
android/guava/src/com/google/common/cache/CacheBuilderSpec.java
| 141
|
[
"cacheBuilderSpecification"
] |
CacheBuilderSpec
| true
| 3
| 6.56
|
google/guava
| 51,352
|
javadoc
| false
|
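A minimal sketch of feeding a parsed spec into CacheBuilder, assuming Guava is on the classpath; the spec string uses the standard maximumSize and expireAfterWrite keys.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheBuilderSpec;

public class CacheSpecDemo {
    public static void main(String[] args) {
        // Each key=value pair is handed to its registered ValueParser, as in the parse method above.
        CacheBuilderSpec spec = CacheBuilderSpec.parse("maximumSize=100,expireAfterWrite=10m");
        Cache<String, String> cache = CacheBuilder.from(spec).build();
        cache.put("k", "v");
        System.out.println(cache.getIfPresent("k")); // v
    }
}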
_equal_values
|
def _equal_values(self, other: Self) -> bool:
"""
Used in .equals defined in base class. Only check the column values
assuming shape and indexes have already been checked.
"""
# For SingleBlockManager (i.e.Series)
if other.ndim != 1:
return False
left = self.blocks[0].values
right = other.blocks[0].values
return array_equals(left, right)
|
Used in .equals defined in base class. Only check the column values
assuming shape and indexes have already been checked.
|
python
|
pandas/core/internals/managers.py
| 2,231
|
[
"self",
"other"
] |
bool
| true
| 2
| 6
|
pandas-dev/pandas
| 47,362
|
unknown
| false
|
createInstance
|
protected abstract F createInstance(String pattern, TimeZone timeZone, Locale locale);
|
Create a format instance using the specified pattern, time zone
and locale.
@param pattern {@link java.text.SimpleDateFormat} compatible pattern, this will not be null.
@param timeZone time zone, this will not be null.
@param locale locale, this will not be null.
@return a pattern based date/time formatter.
@throws IllegalArgumentException if pattern is invalid or {@code null}.
|
java
|
src/main/java/org/apache/commons/lang3/time/AbstractFormatCache.java
| 143
|
[
"pattern",
"timeZone",
"locale"
] |
F
| true
| 1
| 6.48
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
sendSyncGroupRequest
|
private RequestFuture<ByteBuffer> sendSyncGroupRequest(SyncGroupRequest.Builder requestBuilder) {
if (coordinatorUnknown())
return RequestFuture.coordinatorNotAvailable();
return client.send(coordinator, requestBuilder)
.compose(new SyncGroupResponseHandler(generation));
}
|
Send a SyncGroup request to the group coordinator and compose the response into the
member's assignment. If the coordinator is unknown, a coordinator-not-available future
is returned immediately instead of sending the request.
@return A request future which wraps the assignment returned by the coordinator
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
| 806
|
[
"requestBuilder"
] | true
| 2
| 7.76
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
getErrorPath
|
private @Nullable String getErrorPath(Map<Integer, String> map, Integer status) {
if (map.containsKey(status)) {
return map.get(status);
}
return this.global;
}
|
Return the error path registered for the given status code, falling back to the global
error path when no status-specific mapping exists.
@param map the mapping of status codes to error paths
@param status the response status code
@return the matching error path, or {@code null} if neither a specific nor a global path is set
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/web/servlet/support/ErrorPageFilter.java
| 238
|
[
"map",
"status"
] |
String
| true
| 2
| 7.92
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
replaceAll
|
@Override
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
checkNotNull(function);
Node<K, V> oldFirst = firstInKeyInsertionOrder;
clear();
for (Node<K, V> node = oldFirst; node != null; node = node.nextInKeyInsertionOrder) {
put(node.key, function.apply(node.key, node.value));
}
}
|
Replaces each entry's value with the result of applying the given function to that entry's
key and value, preserving the key insertion order of the map.
<p>As with any BiMap mutation, the replacement values must remain unique.
@param function the function used to compute each replacement value
|
java
|
guava/src/com/google/common/collect/HashBiMap.java
| 598
|
[
"function"
] |
void
| true
| 2
| 7.92
|
google/guava
| 51,352
|
javadoc
| false
|
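A minimal sketch of replaceAll on a HashBiMap, assuming Guava is on the classpath; the port numbers are illustrative. Because a BiMap keeps values unique, the mapping function must not produce duplicate values.

import com.google.common.collect.HashBiMap;

public class BiMapReplaceAllDemo {
    public static void main(String[] args) {
        HashBiMap<String, Integer> ports = HashBiMap.create();
        ports.put("http", 80);
        ports.put("https", 443);
        // Rewrites every value while preserving key insertion order, as in the override above.
        ports.replaceAll((name, port) -> port + 10000);
        System.out.println(ports);                      // {http=10080, https=10443}
        System.out.println(ports.inverse().get(10443)); // https
    }
}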
isExcludedFromDependencyCheck
|
protected boolean isExcludedFromDependencyCheck(PropertyDescriptor pd) {
return (AutowireUtils.isExcludedFromDependencyCheck(pd) ||
this.ignoredDependencyTypes.contains(pd.getPropertyType()) ||
AutowireUtils.isSetterDefinedInInterface(pd, this.ignoredDependencyInterfaces));
}
|
Determine whether the given bean property is excluded from dependency checks.
<p>This implementation excludes properties defined by CGLIB and
properties whose type matches an ignored dependency type or which
are defined by an ignored dependency interface.
@param pd the PropertyDescriptor of the bean property
@return whether the bean property is excluded
@see #ignoreDependencyType(Class)
@see #ignoreDependencyInterface(Class)
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/AbstractAutowireCapableBeanFactory.java
| 1,618
|
[
"pd"
] | true
| 3
| 7.28
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
stream
|
public Stream<ConditionAndOutcome> stream() {
return StreamSupport.stream(spliterator(), false);
}
|
Return a {@link Stream} of the {@link ConditionAndOutcome} items.
@return a stream of the {@link ConditionAndOutcome} items.
@since 3.5.0
|
java
|
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionEvaluationReport.java
| 249
|
[] | true
| 1
| 6.32
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
_tasks_by_type
|
def _tasks_by_type(self, name, limit=None, reverse=True):
"""Get all tasks by type.
This is slower than accessing :attr:`tasks_by_type`,
but will be ordered by time.
Returns:
Generator: giving ``(uuid, Task)`` pairs.
"""
return islice(
((uuid, task) for uuid, task in self.tasks_by_time(reverse=reverse)
if task.name == name),
0, limit,
)
|
Get all tasks by type.
This is slower than accessing :attr:`tasks_by_type`,
but will be ordered by time.
Returns:
Generator: giving ``(uuid, Task)`` pairs.
|
python
|
celery/events/state.py
| 676
|
[
"self",
"name",
"limit",
"reverse"
] | false
| 1
| 6.08
|
celery/celery
| 27,741
|
unknown
| false
|
|
_get_index_str
|
def _get_index_str(self, index: sympy.Expr) -> str:
"""
Convert an index expression to a string suitable for Pallas indexing.
Pallas operates on full arrays, so we need to convert index expressions
to JAX array slicing. For example:
- x0 -> "..." (contiguous access, full array)
- 2*x0 -> "::2" (strided access with stride 2)
- 2*x0 + 1 -> "1::2" (strided access with offset 1, stride 2)
Args:
index: The indexing expression to convert
Returns:
The indexing string to use in generated code
"""
# Prepare and simplify the index
prepared_index = self.prepare_indexing(index)
# Note: Block variable detection (im2col patterns) is handled in load()/store()
# where we have access to buffer dimensions. We check the buffer size
# against iteration variables there to detect gather patterns.
# For simple single-symbol access (contiguous case), we can use [...]
# which is more efficient as it operates on the entire array at once
if isinstance(prepared_index, sympy.Symbol):
return "..."
elif prepared_index.is_Integer:
# Scalar index
return str(prepared_index)
else:
# Complex expression (strided/scatter access)
# Try to extract stride and offset for common patterns
return self._convert_to_jax_slice(prepared_index)
|
Convert an index expression to a string suitable for Pallas indexing.
Pallas operates on full arrays, so we need to convert index expressions
to JAX array slicing. For example:
- x0 -> "..." (contiguous access, full array)
- 2*x0 -> "::2" (strided access with stride 2)
- 2*x0 + 1 -> "1::2" (strided access with offset 1, stride 2)
Args:
index: The indexing expression to convert
Returns:
The indexing string to use in generated code
|
python
|
torch/_inductor/codegen/pallas.py
| 856
|
[
"self",
"index"
] |
str
| true
| 4
| 8.08
|
pytorch/pytorch
| 96,034
|
google
| false
|
topicIdPartitionsToLogString
|
private String topicIdPartitionsToLogString(Collection<TopicIdPartition> partitions) {
if (!log.isTraceEnabled()) {
return String.format("%d partition(s)", partitions.size());
}
return "(" + partitions.stream().map(TopicIdPartition::toString).collect(Collectors.joining(", ")) + ")";
}
|
Format a collection of topic-id partitions for logging. When trace logging is disabled only
the partition count is returned; otherwise the full list of partitions is rendered.
@param partitions the partitions to describe
@return a log-friendly description of the partitions
|
java
|
clients/src/main/java/org/apache/kafka/clients/FetchSessionHandler.java
| 399
|
[
"partitions"
] |
String
| true
| 2
| 6.4
|
apache/kafka
| 31,560
|
javadoc
| false
|
parseUnionOrIntersectionType
|
function parseUnionOrIntersectionType(
operator: SyntaxKind.BarToken | SyntaxKind.AmpersandToken,
parseConstituentType: () => TypeNode,
createTypeNode: (types: NodeArray<TypeNode>) => UnionOrIntersectionTypeNode,
): TypeNode {
const pos = getNodePos();
const isUnionType = operator === SyntaxKind.BarToken;
const hasLeadingOperator = parseOptional(operator);
let type = hasLeadingOperator && parseFunctionOrConstructorTypeToError(isUnionType)
|| parseConstituentType();
if (token() === operator || hasLeadingOperator) {
const types = [type];
while (parseOptional(operator)) {
types.push(parseFunctionOrConstructorTypeToError(isUnionType) || parseConstituentType());
}
type = finishNode(createTypeNode(createNodeArray(types, pos)), pos);
}
return type;
}
|
Parses a union or intersection type, consuming an optional leading operator and then
collecting constituent types separated by the operator.
@param operator The separator token: a bar for union types or an ampersand for intersection types.
@param parseConstituentType Callback used to parse each constituent type.
@param createTypeNode Factory that wraps the collected constituents in a union or intersection type node.
|
typescript
|
src/compiler/parser.ts
| 4,819
|
[
"operator",
"parseConstituentType",
"createTypeNode"
] | true
| 7
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|