metadata_version
string
name
string
version
string
summary
string
description
string
description_content_type
string
author
string
author_email
string
maintainer
string
maintainer_email
string
license
string
keywords
string
classifiers
list
platform
list
home_page
string
download_url
string
requires_python
string
requires
list
provides
list
obsoletes
list
requires_dist
list
provides_dist
list
obsoletes_dist
list
requires_external
list
project_urls
list
uploaded_via
string
upload_time
timestamp[us]
filename
string
size
int64
path
string
python_version
string
packagetype
string
comment_text
string
has_signature
bool
md5_digest
string
sha256_digest
string
blake2_256_digest
string
license_expression
string
license_files
list
2.1
stache-ai-dynamodb
0.1.1
Dynamodb provider for Stache AI
# stache-ai-dynamodb DynamoDB providers for [Stache AI](https://github.com/stache-ai/stache-ai) - serverless namespace registry and document index for AWS Lambda deployments. ## Installation ```bash pip install stache-ai-dynamodb ``` ## Providers This package includes two providers: | Provider | Type | Description | |----------|------|-------------| | `dynamodb` | Namespace | Hierarchical namespace registry with parent-child relationships | | `dynamodb` | Document Index | Document metadata storage with namespace filtering | ## Configuration ```python from stache_ai.config import Settings settings = Settings( namespace_provider="dynamodb", dynamodb_namespace_table="my-namespaces", # Required # Optional: Document index enable_document_index=True, document_index_provider="dynamodb", dynamodb_document_table="my-documents", ) ``` ### Environment Variables | Variable | Description | Default | |----------|-------------|---------| | `NAMESPACE_PROVIDER` | Set to `dynamodb` | `none` | | `DYNAMODB_NAMESPACE_TABLE` | Namespace table name | Required | | `DOCUMENT_INDEX_PROVIDER` | Set to `dynamodb` | `none` | | `DYNAMODB_DOCUMENT_TABLE` | Document table name | Required if using document index | | `AWS_REGION` | AWS region | `us-east-1` | ## Table Schemas ### Namespace Table Primary key: `id` (String) GSI: `parent_id-index` on `parent_id` (String) | Attribute | Type | Description | |-----------|------|-------------| | `id` | String | Unique namespace identifier (primary key) | | `name` | String | Display name | | `description` | String | Optional description | | `parent_id` | String | Parent namespace ID (`__ROOT__` for top-level) | | `metadata` | String | JSON-encoded metadata | | `filter_keys` | String | JSON-encoded list of filter keys | | `created_at` | String | ISO 8601 timestamp | | `updated_at` | String | ISO 8601 timestamp | ### Document Table Uses single-table design with composite keys: **Primary Key:** - `PK` (String): `DOC#{namespace}#{doc_id}` - `SK` (String): 
`METADATA` **Global Secondary Indexes:** - `GSI1`: Namespace queries - `GSI1PK`: `NAMESPACE#{namespace}` - `GSI1SK`: `CREATED#{timestamp}` - `GSI2`: Filename lookups - `GSI2PK`: `FILENAME#{namespace}#{filename}` - `GSI2SK`: `CREATED#{timestamp}` | Attribute | Type | Description | |-----------|------|-------------| | `PK` | String | Primary key: `DOC#{namespace}#{doc_id}` | | `SK` | String | Sort key: `METADATA` | | `GSI1PK` | String | Namespace index: `NAMESPACE#{namespace}` | | `GSI1SK` | String | Created timestamp: `CREATED#{timestamp}` | | `GSI2PK` | String | Filename index: `FILENAME#{namespace}#{filename}` | | `GSI2SK` | String | Created timestamp: `CREATED#{timestamp}` | | `doc_id` | String | Document ID | | `namespace` | String | Namespace ID | | `filename` | String | Original filename | | `title` | String | Document title | | `source` | String | Source identifier | | `content_type` | String | MIME type | | `chunk_count` | Number | Number of chunks | | `metadata` | Map | Document metadata | | `created_at` | String | ISO 8601 timestamp | | `updated_at` | String | ISO 8601 timestamp | ## Infrastructure Examples ### AWS SAM Template ```yaml Resources: # Namespace registry table NamespaceTable: Type: AWS::DynamoDB::Table Properties: TableName: !Sub '${AWS::StackName}-namespaces' BillingMode: PAY_PER_REQUEST AttributeDefinitions: - AttributeName: id AttributeType: S - AttributeName: parent_id AttributeType: S KeySchema: - AttributeName: id KeyType: HASH GlobalSecondaryIndexes: - IndexName: parent_id-index KeySchema: - AttributeName: parent_id KeyType: HASH Projection: ProjectionType: ALL # Document index table (single-table design) DocumentsTable: Type: AWS::DynamoDB::Table Properties: TableName: !Sub '${AWS::StackName}-documents' BillingMode: PAY_PER_REQUEST AttributeDefinitions: - AttributeName: PK AttributeType: S - AttributeName: SK AttributeType: S - AttributeName: GSI1PK AttributeType: S - AttributeName: GSI1SK AttributeType: S - AttributeName: GSI2PK 
AttributeType: S - AttributeName: GSI2SK AttributeType: S KeySchema: - AttributeName: PK KeyType: HASH - AttributeName: SK KeyType: RANGE GlobalSecondaryIndexes: - IndexName: GSI1 KeySchema: - AttributeName: GSI1PK KeyType: HASH - AttributeName: GSI1SK KeyType: RANGE Projection: ProjectionType: ALL - IndexName: GSI2 KeySchema: - AttributeName: GSI2PK KeyType: HASH - AttributeName: GSI2SK KeyType: RANGE Projection: ProjectionType: ALL # Lambda function with DynamoDB access StacheFunction: Type: AWS::Serverless::Function Properties: Environment: Variables: NAMESPACE_PROVIDER: dynamodb DYNAMODB_NAMESPACE_TABLE: !Ref NamespaceTable DOCUMENT_INDEX_PROVIDER: dynamodb DYNAMODB_DOCUMENT_TABLE: !Ref DocumentsTable Policies: - DynamoDBCrudPolicy: TableName: !Ref NamespaceTable - DynamoDBCrudPolicy: TableName: !Ref DocumentsTable ``` ### Terraform ```hcl resource "aws_dynamodb_table" "namespaces" { name = "${var.prefix}-namespaces" billing_mode = "PAY_PER_REQUEST" hash_key = "id" attribute { name = "id" type = "S" } attribute { name = "parent_id" type = "S" } global_secondary_index { name = "parent_id-index" hash_key = "parent_id" projection_type = "ALL" } } resource "aws_dynamodb_table" "documents" { name = "${var.prefix}-documents" billing_mode = "PAY_PER_REQUEST" hash_key = "PK" range_key = "SK" attribute { name = "PK" type = "S" } attribute { name = "SK" type = "S" } attribute { name = "GSI1PK" type = "S" } attribute { name = "GSI1SK" type = "S" } attribute { name = "GSI2PK" type = "S" } attribute { name = "GSI2SK" type = "S" } global_secondary_index { name = "GSI1" hash_key = "GSI1PK" range_key = "GSI1SK" projection_type = "ALL" } global_secondary_index { name = "GSI2" hash_key = "GSI2PK" range_key = "GSI2SK" projection_type = "ALL" } } ``` ### AWS CLI ```bash # Create namespace table aws dynamodb create-table \ --table-name stache-namespaces \ --attribute-definitions \ AttributeName=id,AttributeType=S \ AttributeName=parent_id,AttributeType=S \ --key-schema 
AttributeName=id,KeyType=HASH \ --global-secondary-indexes \ 'IndexName=parent_id-index,KeySchema=[{AttributeName=parent_id,KeyType=HASH}],Projection={ProjectionType=ALL}' \ --billing-mode PAY_PER_REQUEST # Create documents table aws dynamodb create-table \ --table-name stache-documents \ --attribute-definitions \ AttributeName=PK,AttributeType=S \ AttributeName=SK,AttributeType=S \ AttributeName=GSI1PK,AttributeType=S \ AttributeName=GSI1SK,AttributeType=S \ AttributeName=GSI2PK,AttributeType=S \ AttributeName=GSI2SK,AttributeType=S \ --key-schema \ AttributeName=PK,KeyType=HASH \ AttributeName=SK,KeyType=RANGE \ --global-secondary-indexes \ 'IndexName=GSI1,KeySchema=[{AttributeName=GSI1PK,KeyType=HASH},{AttributeName=GSI1SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \ 'IndexName=GSI2,KeySchema=[{AttributeName=GSI2PK,KeyType=HASH},{AttributeName=GSI2SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \ --billing-mode PAY_PER_REQUEST ``` ## IAM Permissions The Lambda function needs these permissions on both tables: ```yaml - dynamodb:DescribeTable - dynamodb:GetItem - dynamodb:PutItem - dynamodb:UpdateItem - dynamodb:DeleteItem - dynamodb:Query - dynamodb:Scan ``` Or use the SAM `DynamoDBCrudPolicy` as shown above. ## Requirements - Python >= 3.10 - stache-ai >= 0.1.0 - boto3 >= 1.34.0
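The composite-key scheme above can be exercised without any AWS calls. Below is a minimal sketch that builds a document's primary key and the low-level DynamoDB `Query` parameters for a namespace lookup on `GSI1`; the table name `my-documents`, the namespace `research`, and the function names are illustrative placeholders, not the package's actual API:

```python
def doc_item_key(namespace: str, doc_id: str) -> dict:
    """Primary key for a document item per the schema above."""
    return {"PK": f"DOC#{namespace}#{doc_id}", "SK": "METADATA"}

def namespace_query_params(table_name: str, namespace: str) -> dict:
    """Low-level DynamoDB Query parameters for listing a namespace's
    documents via GSI1, newest first (usable as boto3 client.query(**params))."""
    return {
        "TableName": table_name,
        "IndexName": "GSI1",
        "KeyConditionExpression": "GSI1PK = :pk",
        "ExpressionAttributeValues": {":pk": {"S": f"NAMESPACE#{namespace}"}},
        "ScanIndexForward": False,  # sort key is CREATED#{timestamp}; descending = newest first
    }

print(doc_item_key("research", "doc-42"))
# {'PK': 'DOC#research#doc-42', 'SK': 'METADATA'}
```

Because `GSI1SK` encodes the creation timestamp, flipping `ScanIndexForward` is all it takes to switch between oldest-first and newest-first listings.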
text/markdown
Stache Contributors
null
null
null
MIT
stache, rag, ai, dynamodb
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai<1.0.0,>=0.1.0", "boto3>=1.34.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:24.619027
stache_ai_dynamodb-0.1.1.tar.gz
22,441
0f/d0/83f96381fa85516827574b2b3d2462f87bf90a4ae99380eadbe537be5a4b/stache_ai_dynamodb-0.1.1.tar.gz
source
sdist
null
false
9a793fa3b3b10c75eda02756548c27ad
6a00cae09a486c5fa39a593fc5abf4d1ab85f9610efe5c1de77cb3f5ce6e40a6
0fd083f96381fa85516827574b2b3d2462f87bf90a4ae99380eadbe537be5a4b
null
[]
2.1
stache-ai-ocr
0.1.2
OCR support for Stache AI document loaders
# stache-ai-ocr OCR support for Stache AI document loaders. Provides a high-priority PDF loader that falls back to OCR for scanned documents. ## Installation ```bash pip install stache-ai-ocr apt install ocrmypdf # System dependency required ``` ## Usage Once installed, the OCR loader automatically registers and takes priority over the basic PDF loader for all PDF files. The loader will: 1. First attempt normal text extraction with pdfplumber 2. If no text is found (scanned PDF), fall back to OCR using ocrmypdf 3. Gracefully handle missing ocrmypdf (logs warning and returns empty text) ## System Requirements - **ocrmypdf** system binary must be installed - Ubuntu/Debian: `apt install ocrmypdf` - macOS: `brew install ocrmypdf` - Includes Tesseract OCR engine ## Priority Override This loader registers with priority 10, overriding the basic PDF loader (priority 0). This ensures OCR is used when available without affecting systems where it's not installed.
text/markdown
Stache Contributors
null
null
null
MIT
stache, ocr, pdf, document-processing
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai>=0.1.0", "pdfplumber>=0.10.0" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:28.509861
stache_ai_ocr-0.1.2-py3-none-any.whl
5,649
ca/d7/179ec962bbde4e00c0afc0294706f055e12b12273e872b73d8942e2d9e22/stache_ai_ocr-0.1.2-py3-none-any.whl
py3
bdist_wheel
null
false
ae5f86d8fd8593970b7df07193834809
a9dcfcf482d32a170cbac2a3d81d054288e645d86e60fe8e0282a26e7c3c0220
cad7179ec962bbde4e00c0afc0294706f055e12b12273e872b73d8942e2d9e22
null
[]
2.1
stache-ai-ocr
0.1.2
OCR support for Stache AI document loaders
# stache-ai-ocr OCR support for Stache AI document loaders. Provides a high-priority PDF loader that falls back to OCR for scanned documents. ## Installation ```bash pip install stache-ai-ocr apt install ocrmypdf # System dependency required ``` ## Usage Once installed, the OCR loader automatically registers and takes priority over the basic PDF loader for all PDF files. The loader will: 1. First attempt normal text extraction with pdfplumber 2. If no text is found (scanned PDF), fall back to OCR using ocrmypdf 3. Gracefully handle missing ocrmypdf (logs warning and returns empty text) ## System Requirements - **ocrmypdf** system binary must be installed - Ubuntu/Debian: `apt install ocrmypdf` - macOS: `brew install ocrmypdf` - Includes Tesseract OCR engine ## Priority Override This loader registers with priority 10, overriding the basic PDF loader (priority 0). This ensures OCR is used when available without affecting systems where it's not installed.
text/markdown
Stache Contributors
null
null
null
MIT
stache, ocr, pdf, document-processing
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai>=0.1.0", "pdfplumber>=0.10.0" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:29.504850
stache_ai_ocr-0.1.2.tar.gz
12,177
2f/42/2e078355a54342403823b3a5dca189a101db7c4a3511c2379a10630a0430/stache_ai_ocr-0.1.2.tar.gz
source
sdist
null
false
4171d85fdd60f7c41404a2dfef3e7bb4
9680a8574093d40ffa093838acd65de76ee8e0e01829cd0e2223cb40f55b5163
2f422e078355a54342403823b3a5dca189a101db7c4a3511c2379a10630a0430
null
[]
2.1
stache-ai-ollama
0.1.1
Ollama provider for Stache AI
# stache-ai-ollama Ollama provider for [Stache AI](https://github.com/stache-ai/stache-ai). ## Installation ```bash pip install stache-ai-ollama ``` ## Usage Install the package and configure the provider in your settings: ```python from stache_ai.config import Settings settings = Settings( llm_provider="ollama" ) ``` The provider will be automatically discovered via entry points. ## Requirements - Python >= 3.10 - stache-ai >= 0.1.0 - httpx >= 0.25.0
text/markdown
Stache Contributors
null
null
null
MIT
stache, rag, ai, ollama
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai<1.0.0,>=0.1.0", "httpx>=0.25.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:33.451930
stache_ai_ollama-0.1.1-py3-none-any.whl
14,502
c8/ca/286202ad4992bff240949d7b79d7072edb51fd9ddd897b33cbf1cd098502/stache_ai_ollama-0.1.1-py3-none-any.whl
py3
bdist_wheel
null
false
ec0f0d36f7ef09baa3f91affdf5bfe24
78d3fa5a608cd5d439a531f88f2eaab35d1dd5a287da105fe8a1eaa9a391534c
c8ca286202ad4992bff240949d7b79d7072edb51fd9ddd897b33cbf1cd098502
null
[]
2.1
stache-ai-ollama
0.1.1
Ollama provider for Stache AI
# stache-ai-ollama Ollama provider for [Stache AI](https://github.com/stache-ai/stache-ai). ## Installation ```bash pip install stache-ai-ollama ``` ## Usage Install the package and configure the provider in your settings: ```python from stache_ai.config import Settings settings = Settings( llm_provider="ollama" ) ``` The provider will be automatically discovered via entry points. ## Requirements - Python >= 3.10 - stache-ai >= 0.1.0 - httpx >= 0.25.0
text/markdown
Stache Contributors
null
null
null
MIT
stache, rag, ai, ollama
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai<1.0.0,>=0.1.0", "httpx>=0.25.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:34.546006
stache_ai_ollama-0.1.1.tar.gz
25,852
94/b0/ec1e0e81eabec725115664bf8e640471f1a8168274ed77b6f13cc24c1742/stache_ai_ollama-0.1.1.tar.gz
source
sdist
null
false
a32e022ad522f1de716bac66abadbae8
813c1748d470732c20e209654ca636f8022e770165ff4a836167ad5395f7b74e
94b0ec1e0e81eabec725115664bf8e640471f1a8168274ed77b6f13cc24c1742
null
[]
2.1
stache-ai-s3vectors
0.1.1
S3Vectors provider for Stache AI
# stache-ai-s3vectors S3 Vectors provider for [Stache AI](https://github.com/stache-ai/stache-ai) - serverless vector database using Amazon S3 Vectors. ## Installation ```bash pip install stache-ai-s3vectors ``` ## Usage Install the package and configure the provider in your settings: ```python from stache_ai.config import Settings settings = Settings( vectordb_provider="s3vectors", s3vectors_bucket_name="my-vector-bucket", # Required s3vectors_region="us-east-1", # Optional, defaults to AWS_REGION ) ``` The provider will be automatically discovered via entry points. ### Environment Variables | Variable | Description | Default | |----------|-------------|---------| | `VECTORDB_PROVIDER` | Set to `s3vectors` | Required | | `S3VECTORS_BUCKET_NAME` | S3 Vectors bucket name | Required | | `S3VECTORS_REGION` | AWS region for S3 Vectors | `us-east-1` | | `AWS_REGION` | Fallback region | `us-east-1` | ## IAM Permissions The S3 Vectors provider requires specific IAM permissions. Note that S3 Vectors uses its own service namespace (`s3vectors:`), not the standard S3 namespace. 
### Minimum Required Permissions ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3vectors:GetVectorBucket" ], "Resource": "arn:aws:s3vectors:REGION:ACCOUNT:vector-bucket/BUCKET_NAME" }, { "Effect": "Allow", "Action": [ "s3vectors:CreateIndex", "s3vectors:GetIndex", "s3vectors:ListIndexes", "s3vectors:PutVectors", "s3vectors:QueryVectors", "s3vectors:ListVectors", "s3vectors:DeleteVectors", "s3vectors:GetVectors" ], "Resource": "arn:aws:s3vectors:REGION:ACCOUNT:vector-bucket/BUCKET_NAME/index/*" } ] } ``` ### SAM Template Example ```yaml Resources: S3VectorsBucket: Type: AWS::S3Vectors::VectorBucket Properties: VectorBucketName: !Sub '${AWS::StackName}-vectors' StacheFunction: Type: AWS::Serverless::Function Properties: Environment: Variables: VECTORDB_PROVIDER: s3vectors S3VECTORS_BUCKET_NAME: !GetAtt S3VectorsBucket.VectorBucketName Policies: - Version: '2012-10-17' Statement: - Effect: Allow Action: - s3vectors:GetVectorBucket Resource: - !GetAtt S3VectorsBucket.VectorBucketArn - Effect: Allow Action: - s3vectors:CreateIndex - s3vectors:GetIndex - s3vectors:ListIndexes - s3vectors:PutVectors - s3vectors:QueryVectors - s3vectors:ListVectors - s3vectors:DeleteVectors - s3vectors:GetVectors Resource: - !Sub '${S3VectorsBucket.VectorBucketArn}/index/*' ``` ### Terraform Example ```hcl data "aws_iam_policy_document" "s3vectors" { statement { effect = "Allow" actions = [ "s3vectors:GetVectorBucket" ] resources = [ "arn:aws:s3vectors:${var.region}:${data.aws_caller_identity.current.account_id}:vector-bucket/${var.bucket_name}" ] } statement { effect = "Allow" actions = [ "s3vectors:CreateIndex", "s3vectors:GetIndex", "s3vectors:ListIndexes", "s3vectors:PutVectors", "s3vectors:QueryVectors", "s3vectors:ListVectors", "s3vectors:DeleteVectors", "s3vectors:GetVectors" ] resources = [ "arn:aws:s3vectors:${var.region}:${data.aws_caller_identity.current.account_id}:vector-bucket/${var.bucket_name}/index/*" ] } } ``` ## Important Notes ### 
Bucket Name vs ARN The provider uses the **bucket name** (not the ARN) for API calls, but IAM policies require the full ARN format: - **Environment variable**: `S3VECTORS_BUCKET_NAME=my-vectors-bucket` (just the name) - **IAM Resource**: `arn:aws:s3vectors:us-east-1:123456789012:vector-bucket/my-vectors-bucket` (full ARN) S3 Vectors bucket names are **globally unique** across AWS, so include your account ID or a unique prefix: ```yaml # SAM template example VectorBucketName: !Sub '${AWS::StackName}-vectors-${AWS::AccountId}' ``` ### Metadata Limits S3 Vectors has metadata size limits: - **Filterable metadata**: 2KB limit (used in query filters) - **Non-filterable metadata**: Part of 40KB total (returned but can't filter) When creating indexes, specify large fields like `text` as non-filterable: ```bash aws s3vectors create-index \ --vector-bucket-name my-bucket \ --index-name my-index \ --dimension 1024 \ --distance-metric cosine \ --metadata-configuration 'nonFilterableMetadataKeys=["text"]' ``` ### list_vectors Limitation The `list_vectors` API does NOT support metadata filtering - only `query_vectors` supports filtering. For operations that need to filter without a query vector, the provider lists all vectors and filters client-side. ## Requirements - Python >= 3.10 - stache-ai >= 0.1.0 - boto3 >= 1.34.0
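The client-side filtering fallback described for `list_vectors` can be illustrated with plain Python; the function name and the metadata shape below are illustrative, not the provider's actual internals:

```python
def filter_vectors_locally(vectors: list[dict], wanted: dict) -> list[dict]:
    """Keep only vectors whose metadata matches every key/value pair
    in `wanted` -- mirrors the provider's fallback when no query
    vector is available, since list_vectors cannot filter server-side."""
    return [
        v for v in vectors
        if all(v.get("metadata", {}).get(k) == val for k, val in wanted.items())
    ]

vecs = [
    {"key": "a", "metadata": {"namespace": "docs", "lang": "en"}},
    {"key": "b", "metadata": {"namespace": "code"}},
]
print(filter_vectors_locally(vecs, {"namespace": "docs"}))
# [{'key': 'a', 'metadata': {'namespace': 'docs', 'lang': 'en'}}]
```

Note this means filtered deletes or listings scale with the total number of vectors in the index, not with the number of matches; only the filterable (2KB) metadata keys can participate in such predicates.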
text/markdown
Stache Contributors
null
null
null
MIT
stache, rag, ai, s3vectors
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai<1.0.0,>=0.1.0", "boto3>=1.34.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:38.251179
stache_ai_s3vectors-0.1.1-py3-none-any.whl
10,051
1f/68/06f5805f2538bf78127d6ef92bfca3479f2050ccfd56b312838b8a6c14ad/stache_ai_s3vectors-0.1.1-py3-none-any.whl
py3
bdist_wheel
null
false
1ff4e3b6bcdb51d4a475c7603eab733e
e966ef5bad937d8983029b9b0358c3f635283269f36afbe02269b193287c26e2
1f6806f5805f2538bf78127d6ef92bfca3479f2050ccfd56b312838b8a6c14ad
null
[]
2.1
stache-ai-s3vectors
0.1.1
S3Vectors provider for Stache AI
# stache-ai-s3vectors S3 Vectors provider for [Stache AI](https://github.com/stache-ai/stache-ai) - serverless vector database using Amazon S3 Vectors. ## Installation ```bash pip install stache-ai-s3vectors ``` ## Usage Install the package and configure the provider in your settings: ```python from stache_ai.config import Settings settings = Settings( vectordb_provider="s3vectors", s3vectors_bucket_name="my-vector-bucket", # Required s3vectors_region="us-east-1", # Optional, defaults to AWS_REGION ) ``` The provider will be automatically discovered via entry points. ### Environment Variables | Variable | Description | Default | |----------|-------------|---------| | `VECTORDB_PROVIDER` | Set to `s3vectors` | Required | | `S3VECTORS_BUCKET_NAME` | S3 Vectors bucket name | Required | | `S3VECTORS_REGION` | AWS region for S3 Vectors | `us-east-1` | | `AWS_REGION` | Fallback region | `us-east-1` | ## IAM Permissions The S3 Vectors provider requires specific IAM permissions. Note that S3 Vectors uses its own service namespace (`s3vectors:`), not the standard S3 namespace. 
### Minimum Required Permissions ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3vectors:GetVectorBucket" ], "Resource": "arn:aws:s3vectors:REGION:ACCOUNT:vector-bucket/BUCKET_NAME" }, { "Effect": "Allow", "Action": [ "s3vectors:CreateIndex", "s3vectors:GetIndex", "s3vectors:ListIndexes", "s3vectors:PutVectors", "s3vectors:QueryVectors", "s3vectors:ListVectors", "s3vectors:DeleteVectors", "s3vectors:GetVectors" ], "Resource": "arn:aws:s3vectors:REGION:ACCOUNT:vector-bucket/BUCKET_NAME/index/*" } ] } ``` ### SAM Template Example ```yaml Resources: S3VectorsBucket: Type: AWS::S3Vectors::VectorBucket Properties: VectorBucketName: !Sub '${AWS::StackName}-vectors' StacheFunction: Type: AWS::Serverless::Function Properties: Environment: Variables: VECTORDB_PROVIDER: s3vectors S3VECTORS_BUCKET_NAME: !GetAtt S3VectorsBucket.VectorBucketName Policies: - Version: '2012-10-17' Statement: - Effect: Allow Action: - s3vectors:GetVectorBucket Resource: - !GetAtt S3VectorsBucket.VectorBucketArn - Effect: Allow Action: - s3vectors:CreateIndex - s3vectors:GetIndex - s3vectors:ListIndexes - s3vectors:PutVectors - s3vectors:QueryVectors - s3vectors:ListVectors - s3vectors:DeleteVectors - s3vectors:GetVectors Resource: - !Sub '${S3VectorsBucket.VectorBucketArn}/index/*' ``` ### Terraform Example ```hcl data "aws_iam_policy_document" "s3vectors" { statement { effect = "Allow" actions = [ "s3vectors:GetVectorBucket" ] resources = [ "arn:aws:s3vectors:${var.region}:${data.aws_caller_identity.current.account_id}:vector-bucket/${var.bucket_name}" ] } statement { effect = "Allow" actions = [ "s3vectors:CreateIndex", "s3vectors:GetIndex", "s3vectors:ListIndexes", "s3vectors:PutVectors", "s3vectors:QueryVectors", "s3vectors:ListVectors", "s3vectors:DeleteVectors", "s3vectors:GetVectors" ] resources = [ "arn:aws:s3vectors:${var.region}:${data.aws_caller_identity.current.account_id}:vector-bucket/${var.bucket_name}/index/*" ] } } ``` ## Important Notes ### 
Bucket Name vs ARN The provider uses the **bucket name** (not the ARN) for API calls, but IAM policies require the full ARN format: - **Environment variable**: `S3VECTORS_BUCKET_NAME=my-vectors-bucket` (just the name) - **IAM Resource**: `arn:aws:s3vectors:us-east-1:123456789012:vector-bucket/my-vectors-bucket` (full ARN) S3 Vectors bucket names are **globally unique** across AWS, so include your account ID or a unique prefix: ```yaml # SAM template example VectorBucketName: !Sub '${AWS::StackName}-vectors-${AWS::AccountId}' ``` ### Metadata Limits S3 Vectors has metadata size limits: - **Filterable metadata**: 2KB limit (used in query filters) - **Non-filterable metadata**: Part of 40KB total (returned but can't filter) When creating indexes, specify large fields like `text` as non-filterable: ```bash aws s3vectors create-index \ --vector-bucket-name my-bucket \ --index-name my-index \ --dimension 1024 \ --distance-metric cosine \ --metadata-configuration 'nonFilterableMetadataKeys=["text"]' ``` ### list_vectors Limitation The `list_vectors` API does NOT support metadata filtering - only `query_vectors` supports filtering. For operations that need to filter without a query vector, the provider lists all vectors and filters client-side. ## Requirements - Python >= 3.10 - stache-ai >= 0.1.0 - boto3 >= 1.34.0
text/markdown
Stache Contributors
null
null
null
MIT
stache, rag, ai, s3vectors
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "stache-ai<1.0.0,>=0.1.0", "boto3>=1.34.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stache-ai/stache-ai", "Repository, https://github.com/stache-ai/stache-ai" ]
twine/5.0.0 CPython/3.12.3
2026-01-16T04:21:39.292007
stache_ai_s3vectors-0.1.1.tar.gz
20,964
00/3c/5eb0b8a5ac7fdb926ea1fd95e6346a51f473183d813b5e7c8738e92054db/stache_ai_s3vectors-0.1.1.tar.gz
source
sdist
null
false
94bfee502545b8103ec992a751c861cd
ba2ddf30092f561b19edfc0f23689fcf3b5bb793a69b03ebaa4918bf08714532
003c5eb0b8a5ac7fdb926ea1fd95e6346a51f473183d813b5e7c8738e92054db
null
[]
2.4
oligopool
2026.1.15
Oligopool Calculator - Automated design and analysis of oligopool libraries
<h1 align="center"> <a href="https://github.com/ayaanhossain/oligopool/"> <img src="https://raw.githubusercontent.com/ayaanhossain/repfmt/main/oligopool/img/logo.svg" alt="Oligopool Calculator" width="460" class="center"/> </a> </h1> <h4><p align="center">Version: 2026.01.15</p></h4> <p align="center"> <a style="text-decoration: none" href="#Installation">Installation</a> • <a style="text-decoration: none" href="#Getting-Started">Getting Started</a> • <a style="text-decoration: none" href="#Command-Line-Interface-CLI">CLI</a> • <a style="text-decoration: none" href="#Citation">Citation</a> • <a style="text-decoration: none" href="#License">License</a> </p> `Oligopool Calculator` is a suite of algorithms for automated design and analysis of [oligopool libraries](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9300125/). It enables the scalable design of universal primer sets, error-correctable barcodes, the splitting of long constructs into multiple oligos, and the rapid packing and counting of barcoded reads -- all on a regular 8-core desktop computer. We have used `Oligopool Calculator` in multiple projects to build libraries of tens of thousands of promoters (see [here](https://www.nature.com/articles/s41467-022-32829-5) and [here](https://www.nature.com/articles/s41587-020-0584-2)), ribozymes, and mRNA stability elements (see [here](https://www.nature.com/articles/s41467-024-54059-7)), illustrating the use of a flexible grammar to add multiple barcodes, cut sites, avoid excluded sequences, and optimize experimental constraints. These libraries were later characterized using highly efficient barcode counting provided by `Oligopool Calculator`. To learn more, please check out [our paper in ACS Synthetic Biology](https://pubs.acs.org/doi/10.1021/acssynbio.4c00661). `Oligopool Calculator` facilitates the creative design and application of massively parallel reporter assays by automating and simplifying the whole process. 
It has been benchmarked on simulated libraries containing millions of defined variants and used to analyze billions of reads. <h1 align="center"> <a href="https://github.com/ayaanhossain/oligopool/"> <img src="https://raw.githubusercontent.com/ayaanhossain/repfmt/refs/heads/main/oligopool/img/workflow.svg" alt="Oligopool Calculator Workflow" width="3840" class="center"/> </a> </h1> **Design and analysis of oligopool variants using `Oligopool Calculator`.** **(a)** In `Design Mode`, `Oligopool Calculator` can be used to generate optimized `barcode`s, `primer`s, `spacer`s, `motif`s and `split` longer oligos into shorter `pad`ded fragments for downstream synthesis and assembly. **(b)** Once the library is assembled and cloned, barcoded amplicon sequencing data can be processed via `Analysis Mode` for characterization. `Analysis Mode` proceeds by first `index`ing one or more sets of barcodes, `pack`ing the reads, and then producing count matrices either using `acount` (association counting) or `xcount` (combinatorial counting). ## Installation `Oligopool Calculator` is a `Python3.10+`-exclusive library. On `Linux`, `MacOS` and `Windows Subsystem for Linux` you can install `Oligopool Calculator` from [PyPI](https://pypi.org/project/oligopool/), where it is published as the `oligopool` package: ```bash $ pip install --upgrade oligopool # Installs and/or upgrades oligopool ``` This also installs the command-line tools: `oligopool` and `op`. Or install it directly from GitHub: ```bash $ pip install git+https://github.com/ayaanhossain/oligopool.git ``` Both approaches should install all dependencies automatically. > **Note** This GitHub version will always be updated with all recent fixes. The PyPI version should be more stable. If you are on `Windows` or simply prefer to, `Oligopool Calculator` can also be used via `docker` (please see [the notes](https://github.com/ayaanhossain/oligopool/blob/master/docker-notes.md)). 
**Verifying Installation**

Successful installation will look like this.

```python
$ python
Python 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import oligopool as op
>>> op.__version__
'2026.01.15'
>>>
```

## Getting Started

`Oligopool Calculator` is carefully designed, easy to use, and stupid fast. You can import the library and use its various functions either in a script or interactively inside a `jupyter` environment. Use `help(...)` to read the docs as necessary and follow along.

There are examples of a [design parser](https://github.com/ayaanhossain/oligopool/blob/master/examples/design-parser/design_parser.py) and an [analysis pipeline](https://github.com/ayaanhossain/oligopool/blob/master/examples/analysis-pipeline/analysis_pipeline.py) inside the [`examples`](https://github.com/ayaanhossain/oligopool/tree/master/examples) directory. A notebook demonstrating [`Oligopool Calculator` in action](https://github.com/ayaanhossain/oligopool/blob/master/examples/OligopoolCalculatorInAction.ipynb) is provided there as well.

```python
$ python
Python 3.12.6 | packaged by conda-forge | (main, Sep 30 2024, 18:08:52) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import oligopool as op
>>> help(op)
...
oligopool v2026.01.15
by ah

Automated design and analysis of oligopool libraries.

The various modules in Oligopool Calculator can be used
interactively in a jupyter notebook, or be used to define
scripts for design and analysis pipelines on the cloud.

Oligopool Calculator offers two modes of operation
- Design Mode for designing oligopool libraries, and
- Analysis Mode for analyzing oligopool datasets.

Design Mode workflow

1. Initialize a pandas DataFrame with core library elements.
   a. The DataFrame must contain a unique 'ID' column serving as primary key.
   b. All other columns in the DataFrame must be DNA sequences.
2. Define any optional background sequences via the background module.
3. Add necessary oligopool elements with constraints via element modules.
4. Optionally, split long oligos and pad them via assembly modules.
5. Perform additional maneuvers and finalize library via auxiliary modules.

Background module available
- background

Element modules available
- primer
- barcode
- motif
- spacer

Assembly modules available
- split
- pad

Auxiliary modules available
- merge
- revcomp
- lenstat
- final

Design Mode example sketch

>>> import pandas as pd
>>> import oligopool as op
>>>
>>> # Read initial library
>>> init_df = pd.read_csv('initial_library.csv')
>>>
>>> # Add oligo elements one by one
>>> primer_df, stats = op.primer(input_data=init_df, ...)
>>> barcode_df, stats = op.barcode(input_data=primer_df, ...)
...
>>> # Check length statistics as needed
>>> length_stats = op.lenstat(input_data=further_along_df)
...
>>>
>>> # Split and pad longer oligos if needed
>>> split_df, stats = op.split(input_data=even_further_along_df, ...)
>>> first_pad_df, stats = op.pad(input_data=split_df, ...)
>>> second_pad_df, stats = op.pad(input_data=split_df, ...)
...
>>>
>>> # Finalize the library
>>> final_df, stats = op.final(input_data=ready_to_go_df, ...)
...

Analysis Mode workflow

1. Index one or more CSVs containing barcode (and associate) data.
2. Pack all NGS FastQ files, optionally merging them if required.
3. Use acount for association counting of variants and barcodes.
4. If multiple barcode combinations are to be counted use xcount.
5. Combine count DataFrames and perform stats and ML as necessary.
Indexing module available
- index

Packing module available
- pack

Counting modules available
- acount
- xcount

Analysis Mode example sketch

>>> import pandas as pd
>>> import oligopool as op
>>>
>>> # Read annotated library
>>> bc1_df = pd.read_csv('barcode_1.csv')
>>> bc2_df = pd.read_csv('barcode_2.csv')
>>> av1_df = pd.read_csv('associate_1.csv')
...
>>>
>>> # Index barcodes and any associates
>>> bc1_index_stats = op.index(barcode_data=bc1_df, barcode_column='BC1', ...)
>>> bc2_index_stats = op.index(barcode_data=bc2_df, barcode_column='BC2', ...)
...
>>>
>>> # Pack experiment FastQ files
>>> sam1_pack_stats = op.pack(r1_file='sample_1_R1.fq.gz', ...)
>>> sam2_pack_stats = op.pack(r1_file='sample_2_R1.fq.gz', ...)
...
>>>
>>> # Compute and write barcode combination count matrix
>>> xcount_df, stats = op.xcount(index_files=['bc1_index', 'bc2_index'],
...                              pack_file='sample_1_pack', ...)
...

You can learn more about each module using help.

>>> import oligopool as op
>>> help(op)
>>> help(op.primer)
>>> help(op.barcode)
...
>>> help(op.xcount)

For advanced uses, the following classes are also available.
- vectorDB
- Scry
...
```

### Command Line Interface (CLI)

The `oligopool` package installs a CLI with two equivalent entry points: `oligopool` and `op`.

```bash
$ op
$ op manual
$ op manual topics
$ oligopool manual barcode
```

At a glance, running `op` or `oligopool` with no arguments prints available commands.

```bash
$ op

oligopool v2026.01.15
by ah

usage: oligopool COMMAND --argument=<value> ...
COMMANDS

Available:
    manual      show module documentation
    background  build background k-mer database
    barcode     design constrained barcodes
    primer      design constrained primers
    motif       design or add motifs
    spacer      design or insert spacers
    split       split oligos into fragments
    pad         pad split oligos with primers
    merge       merge elements into one column
    revcomp     reverse complement elements
    lenstat     compute length statistics
    final       finalize library
    index       index barcodes and associates
    pack        pack fastq reads
    acount      association counting
    xcount      combinatorial counting

Note: Run "oligopool COMMAND" to see command-specific options.
```

Most CLI subcommands write outputs to disk, so `--output-file` is required for commands that produce output DataFrames (for example: `barcode`, `primer`, `motif`, `spacer`, `split`, `pad`, `merge`, `revcomp`, `final`).

Example:

```bash
$ op barcode \
    --input-data initial_library.csv \
    --oligo-length-limit 200 \
    --barcode-length 20 \
    --minimum-hamming-distance 3 \
    --maximum-repeat-length 6 \
    --barcode-column Barcode \
    --output-file library_with_barcodes.csv
```

## Citation

If you use `Oligopool Calculator`, or libraries designed or analyzed using the tool, in your research publication, please cite our paper.

```
Hossain A, Cetnar DP, LaFleur TL, McLellan JR, Salis HM.
Automated Design of Oligopools and Rapid Analysis of Massively Parallel Barcoded Measurements.
ACS Synth Biol. 2024;13(12):4218-4232.
doi:10.1021/acssynbio.4c00661
```

You can read the complete article online at [ACS Synthetic Biology](https://doi.org/10.1021/acssynbio.4c00661).

## License

`Oligopool Calculator` (c) 2026 Ayaan Hossain.

`Oligopool Calculator` is **open-source software** released under the [GPL-3.0](https://opensource.org/license/gpl-3-0) license. See the [LICENSE](https://github.com/ayaanhossain/oligopool/blob/master/LICENSE) file for more details.
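Conceptually, the count matrices produced by `acount` and `xcount` tally how often each barcode (or barcode combination) is observed across reads. That tallying step can be sketched in plain Python with made-up, already-resolved reads (an illustration of the idea only, not the library's implementation):

```python
from collections import Counter

# Hypothetical reads already resolved to (BC1, BC2) barcode combinations.
resolved_reads = [
    ("bc1_A", "bc2_X"),
    ("bc1_A", "bc2_X"),
    ("bc1_A", "bc2_Y"),
    ("bc1_B", "bc2_X"),
]

# Combinatorial counting: one tally per observed barcode combination.
combo_counts = Counter(resolved_reads)
print(combo_counts[("bc1_A", "bc2_X")])  # 2
```

The actual tools additionally handle read packing, error-tolerant barcode matching, and DataFrame output, which this sketch omits.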
text/markdown
Ayaan Hossain and Howard Salis
auh57@psu.edu, salis@psu.edu
null
null
null
synthetic computational biology nucleotide oligo pool calculator design analysis barcode primer spacer motif split pad assembly index pack scry classifier count acount xcount
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering", "Topic :: Scientific/Engineering :: Bio-Informatics", "Topic :: Scientific/Engineering :: Chemistry", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Programming Language ::...
[]
https://github.com/ayaanhossain/oligopool
null
<4,>=3.10
[]
[]
[]
[ "biopython>=1.84", "primer3-py>=2.0.3", "msgpack>=1.1.0", "pyfastx>=2.1.0", "edlib>=1.3.9.post1", "parasail>=1.3.4", "nrpcalc>=1.7.0", "sharedb>=1.1.2", "numba>=0.60.0", "seaborn>=0.13.2", "multiprocess>=0.70.17" ]
[]
[]
[]
[ "Bug Reports, https://github.com/ayaanhossain/oligopool/issues", "Source, https://github.com/ayaanhossain/oligopool/tree/master/oligopool" ]
twine/6.2.0 CPython/3.13.7
2026-01-16T04:22:06.227119
oligopool-2026.1.15-py3-none-any.whl
184,787
25/1f/8703f62a492c5c62f5dd600ba3c317144525134bab01127fd3fe047f4832/oligopool-2026.1.15-py3-none-any.whl
py3
bdist_wheel
null
false
6ad3fa80cd1e5bde5bd9459d6774670a
84086058f6da9616327e61345aaffd625ece1c32b7b59412058d960e6560864d
251f8703f62a492c5c62f5dd600ba3c317144525134bab01127fd3fe047f4832
null
[ "LICENSE" ]
2.4
oligopool
2026.1.15
Oligopool Calculator - Automated design and analysis of oligopool libraries
<h1 align="center"> <a href="https://github.com/ayaanhossain/oligopool/"> <img src="https://raw.githubusercontent.com/ayaanhossain/repfmt/main/oligopool/img/logo.svg" alt="Oligopool Calculator" width="460" class="center"/> </a> </h1> <h4><p align="center">Version: 2026.01.15</p></h4> <p align="center"> <a style="text-decoration: none" href="#Installation">Installation</a> • <a style="text-decoration: none" href="#Getting-Started">Getting Started</a> • <a style="text-decoration: none" href="#Command-Line-Interface-CLI">CLI</a> • <a style="text-decoration: none" href="#Citation">Citation</a> • <a style="text-decoration: none" href="#License">License</a> </p> `Oligopool Calculator` is a suite of algorithms for automated design and analysis of [oligopool libraries](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9300125/). It enables the scalable design of universal primer sets, error-correctable barcodes, the splitting of long constructs into multiple oligos, and the rapid packing and counting of barcoded reads -- all on a regular 8-core desktop computer. We have used `Oligopool Calculator` in multiple projects to build libraries of tens of thousands of promoters (see [here](https://www.nature.com/articles/s41467-022-32829-5) and [here](https://www.nature.com/articles/s41587-020-0584-2)), ribozymes, and mRNA stability elements (see [here](https://www.nature.com/articles/s41467-024-54059-7)), illustrating the use of a flexible grammar to add multiple barcodes, cut sites, avoid excluded sequences, and optimize experimental constraints. These libraries were later characterized using highly efficient barcode counting provided by `Oligopool Calculator`. To learn more, please check out [our paper in ACS Synthetic Biology](https://pubs.acs.org/doi/10.1021/acssynbio.4c00661). `Oligopool Calculator` facilitates the creative design and application of massively parallel reporter assays by automating and simplifying the whole process. 
text/markdown
Ayaan Hossain and Howard Salis
auh57@psu.edu, salis@psu.edu
null
null
null
synthetic computational biology nucleotide oligo pool calculator design analysis barcode primer spacer motif split pad assembly index pack scry classifier count acount xcount
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering", "Topic :: Scientific/Engineering :: Bio-Informatics", "Topic :: Scientific/Engineering :: Chemistry", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Programming Language ::...
[]
https://github.com/ayaanhossain/oligopool
null
<4,>=3.10
[]
[]
[]
[ "biopython>=1.84", "primer3-py>=2.0.3", "msgpack>=1.1.0", "pyfastx>=2.1.0", "edlib>=1.3.9.post1", "parasail>=1.3.4", "nrpcalc>=1.7.0", "sharedb>=1.1.2", "numba>=0.60.0", "seaborn>=0.13.2", "multiprocess>=0.70.17" ]
[]
[]
[]
[ "Bug Reports, https://github.com/ayaanhossain/oligopool/issues", "Source, https://github.com/ayaanhossain/oligopool/tree/master/oligopool" ]
twine/6.2.0 CPython/3.13.7
2026-01-16T04:22:07.330972
oligopool-2026.1.15.tar.gz
161,915
24/a2/42550959e9ac1fe35c26dc14d06e280ce5dc65380878e8d5f30121a10680/oligopool-2026.1.15.tar.gz
source
sdist
null
false
2b11ad08520e1f5cd20d9255bfb06505
d039148ed213e5fde0a0bad09d1c19e982c2e318caaacbcd034dd1a60fa7d5ec
24a242550959e9ac1fe35c26dc14d06e280ce5dc65380878e8d5f30121a10680
null
[ "LICENSE" ]
2.1
odoo-addon-fs-storage
16.0.1.3.5.3
Implement the concept of Storage with amazon S3, sftp...
.. image:: https://odoo-community.org/readme-banner-image
   :target: https://odoo-community.org/get-involved?utm_source=readme
   :alt: Odoo Community Association

==========================
Filesystem Storage Backend
==========================

.. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! This file is generated by oca-gen-addon-readme !!
   !! changes will be overwritten.                   !!
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! source digest: sha256:b9f95387306ce78e4543bc0b90f958fa188ba244dd6df41af486078d2d358fdf
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
    :target: https://odoo-community.org/page/development-status
    :alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
    :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
    :alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fstorage-lightgray.png?logo=github
    :target: https://github.com/OCA/storage/tree/16.0/fs_storage
    :alt: OCA/storage
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
    :target: https://translation.odoo-community.org/projects/storage-16-0/storage-16-0-fs_storage
    :alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
    :target: https://runboat.odoo-community.org/builds?repo=OCA/storage&target_branch=16.0
    :alt: Try me on Runboat

|badge1| |badge2| |badge3| |badge4| |badge5|

This addon is a technical addon that allows you to define filesystem-like storage for your data. It's used by other addons to transparently store their data in different kinds of storage.
Through the fs.storage record, you get access to an object that implements the `fsspec.spec.AbstractFileSystem <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`_ interface and therefore gives you a unified interface to access your data, whatever storage protocol you decide to use.

The list of supported protocols depends on the installed fsspec implementations. By default, the addon will install the following protocols:

* LocalFileSystem
* MemoryFileSystem
* ZipFileSystem
* TarFileSystem
* FTPFileSystem
* CachingFileSystem
* WholeFileSystem
* SimpleCacheFileSystem
* ReferenceFileSystem
* GenericFileSystem
* DirFileSystem
* DatabricksFileSystem
* GitHubFileSystem
* JupyterFileSystem
* OdooFileSystem

The OdooFileSystem is the one that allows you to store your data in a directory mounted inside your Odoo storage directory. This is the default FS Storage when creating a new fs.storage record.

Other protocols are available through the installation of additional python packages:

* DropboxDriveFileSystem -> `pip install fsspec[dropbox]`
* HTTPFileSystem -> `pip install fsspec[http]`
* HTTPSFileSystem -> `pip install fsspec[http]`
* GCSFileSystem -> `pip install fsspec[gcs]`
* GSFileSystem -> `pip install fsspec[gs]`
* GoogleDriveFileSystem -> `pip install gdrivefs`
* SFTPFileSystem -> `pip install fsspec[sftp]`
* HadoopFileSystem -> `pip install fsspec[hdfs]`
* S3FileSystem -> `pip install fsspec[s3]`
* WandbFS -> `pip install wandbfs`
* OCIFileSystem -> `pip install fsspec[oci]`
* AsyncLocalFileSystem -> `pip install 'morefs[asynclocalfs]'`
* AzureDatalakeFileSystem -> `pip install fsspec[adl]`
* AzureBlobFileSystem -> `pip install fsspec[abfs]`
* DaskWorkerFileSystem -> `pip install fsspec[dask]`
* GitFileSystem -> `pip install fsspec[git]`
* SMBFileSystem -> `pip install fsspec[smb]`
* LibArchiveFileSystem -> `pip install fsspec[libarchive]`
* OSSFileSystem -> `pip install ossfs`
* WebdavFileSystem -> `pip install webdav4`
* DVCFileSystem -> `pip install dvc`
* XRootDFileSystem -> `pip install fsspec-xrootd`

This list of supported protocols is not exhaustive and could change in the future depending on fsspec releases. You can find more information about the supported protocols in the `fsspec documentation <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`_.

**Table of contents**

.. contents::
   :local:

Usage
=====

Configuration
~~~~~~~~~~~~~

When you create a new backend, you must specify the following:

* The name of the backend. This is the name that will be used to identify the backend within Odoo.
* The code of the backend. This code identifies the backend in the store_fname field of the ir.attachment model. This code must be unique and will be used as the scheme. Example of the store_fname field: ``odoofs://abs34Tg11``.
* The protocol used by the backend. The protocol refers to the supported protocols of the fsspec python package.
* A directory path. This is a root directory from which the filesystem will be mounted. This directory must exist.
* The protocol options. These are the options that will be passed to the fsspec python package when creating the filesystem. These options depend on the protocol used and are described in the fsspec documentation.
* Resolve env vars. This option resolves protocol options values starting with $ from environment variables.
* Check Connection Method. If set, Odoo will always check the connection before using a storage and will remove the fs connection from the cache if the check fails.
* ``Create Marker file``: create a hidden file on the remote and then check that it exists. Use it if you have write access to the remote and if it is not an issue to leave the marker file in the root directory.
* ``List file``: list all files from the root directory.
  You can use it if the directory path does not contain a big list of files (for performance reasons).

Some protocols defined in the fsspec package are wrappers around other protocols. For example, the SimpleCacheFileSystem protocol is a wrapper around any local filesystem protocol. In such cases, you must specify in the protocol options the protocol to be wrapped and the options to be passed to the wrapped protocol. For example, if you want to create a backend that uses the SimpleCacheFileSystem protocol, after selecting the SimpleCacheFileSystem protocol, you must specify the protocol options as follows:

.. code-block:: python

    {
        "directory_path": "/tmp/my_backend",
        "target_protocol": "odoofs",
        "target_options": {...},
    }

In this example, the SimpleCacheFileSystem protocol will be used as a wrapper around the odoofs protocol.

Server Environment
~~~~~~~~~~~~~~~~~~

To ease the management of the filesystem storages configuration across the different environments, the configuration of the filesystem storages can be defined in environment files or directly in the main configuration file. For example, the configuration of a filesystem storage with the code `fsprod` can be provided in the main configuration file as follows:

.. code-block:: ini

    [fs_storage.fsprod]
    protocol=s3
    options={"endpoint_url": "https://my_s3_server/", "key": "KEY", "secret": "SECRET"}
    directory_path=my_bucket

To work, a `storage.backend` record with the code `fsprod` must exist in the database. In your configuration section, you can specify the value for the following fields:

* `protocol`
* `options`
* `directory_path`

Migration from storage_backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The fs_storage addon can be used to replace the storage_backend addon. (It has been designed to be a drop-in replacement for the storage_backend addon.) To ease the migration, the `fs.storage` model defines the high-level methods available in the storage_backend model.
These methods are:

* `add`
* `get`
* `list_files`
* `find_files`
* `move_files`
* `delete`

These methods are wrappers around the methods of the `fsspec.AbstractFileSystem` class (see https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem). They are marked as deprecated and will be removed in a future version (V18) of the addon. You should use the methods of the `fsspec.AbstractFileSystem` class instead, since they are more flexible and powerful. You can access the instance of the `fsspec.AbstractFileSystem` class through the `fs` property of a `fs.storage` record.

Known issues / Roadmap
======================

* Transactions: fsspec comes with a transactional mechanism that, once started, gathers all the files created during the transaction and, if the transaction is committed, moves them to their final locations. It would be useful to bridge this with the transactional mechanism of Odoo. This would make it possible to ensure that all the files created during a transaction are either all moved to their final locations, or all deleted if the transaction is rolled back. This mechanism is only valid for files created during the transaction by a call to the `open` method of the file system. It is not valid for other operations, such as `rm`, `mv_file`, ...

Changelog
=========

16.0.1.2.0 (2024-02-06)
~~~~~~~~~~~~~~~~~~~~~~~

**Features**

- Invalidate FS filesystem object cache when the connection fails, forcing a reconnection.
  (`#320 <https://github.com/OCA/storage/issues/320>`_)

16.0.1.1.0 (2023-12-22)
~~~~~~~~~~~~~~~~~~~~~~~

**Features**

- Add parameter on storage backend to resolve protocol options values starting with $ from environment variables (`#303 <https://github.com/OCA/storage/issues/303>`_)

16.0.1.0.3 (2023-10-17)
~~~~~~~~~~~~~~~~~~~~~~~

**Bugfixes**

- Fix access to technical models to be able to upload attachments for users with basic access (`#289 <https://github.com/OCA/storage/issues/289>`_)

16.0.1.0.2 (2023-10-09)
~~~~~~~~~~~~~~~~~~~~~~~

**Bugfixes**

- Avoid config error when using the webdav protocol. The auth option is expected to be a tuple, not a list. Since our config is loaded from a json file, we cannot use tuples. The fix converts the list to a tuple when the config is related to a webdav protocol and the auth option is in the config. (`#285 <https://github.com/OCA/storage/issues/285>`_)

Bug Tracker
===========

Bugs are tracked on `GitHub Issues <https://github.com/OCA/storage/issues>`_. In case of trouble, please check there if your issue has already been reported. If you spotted it first, help us to smash it by providing detailed and welcome `feedback <https://github.com/OCA/storage/issues/new?body=module:%20fs_storage%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.

Do not contact contributors directly about support or help with technical issues.

Credits
=======

Authors
~~~~~~~

* ACSONE SA/NV

Contributors
~~~~~~~~~~~~

* Laurent Mignon <laurent.mignon@acsone.eu>
* Sébastien BEAU <sebastien.beau@akretion.com>

Maintainers
~~~~~~~~~~~

This module is maintained by the OCA.

.. image:: https://odoo-community.org/logo.png
   :alt: Odoo Community Association
   :target: https://odoo-community.org

OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use.
This module is part of the `OCA/storage <https://github.com/OCA/storage/tree/16.0/fs_storage>`_ project on GitHub.

You are welcome to contribute. To learn how, please visit https://odoo-community.org/page/Contribute.
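The ``Resolve env vars`` option described above substitutes protocol option values that start with ``$`` with values taken from environment variables. A minimal plain-Python sketch of that substitution (illustrative only; the option and variable names are made up, and this is not the addon's actual implementation):

```python
import os

def resolve_env_vars(options):
    """Replace option values starting with '$' by the matching env variable."""
    resolved = {}
    for key, value in options.items():
        if isinstance(value, str) and value.startswith("$"):
            # Fall back to the literal value when the variable is unset.
            resolved[key] = os.environ.get(value[1:], value)
        else:
            resolved[key] = value
    return resolved

# Hypothetical S3-style options; S3_SECRET is a made-up variable name.
os.environ["S3_SECRET"] = "topsecret"
options = {"endpoint_url": "https://my_s3_server/", "secret": "$S3_SECRET"}
print(resolve_env_vars(options)["secret"])  # topsecret
```

This keeps secrets out of the database: the stored options reference variable names, and the concrete values are injected per environment.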
null
ACSONE SA/NV, Odoo Community Association (OCA)
support@odoo-community.org
null
null
LGPL-3
null
[ "Programming Language :: Python", "Framework :: Odoo", "Framework :: Odoo :: 16.0", "License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)", "Development Status :: 4 - Beta" ]
[]
https://github.com/OCA/storage
null
>=3.10
[]
[]
[]
[ "fsspec>=2024.5.0", "odoo-addon-server-environment<16.1dev,>=16.0dev", "odoo<16.1dev,>=16.0a" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:24:02.080575
odoo_addon_fs_storage-16.0.1.3.5.3-py3-none-any.whl
60,643
96/60/76554dfce1dd7f408fe268a838844f6fc88fea58429bed74b86502cff2de/odoo_addon_fs_storage-16.0.1.3.5.3-py3-none-any.whl
py3
bdist_wheel
null
false
4a5019087a616bb933d86812a7d2f67a
54955953c7e67a30b091abfd9ee9d5147f910b734a24f3e50d5428d9694b4da5
966076554dfce1dd7f408fe268a838844f6fc88fea58429bed74b86502cff2de
null
[]
2.1
odoo-addon-fs-storage
17.0.2.1.0.1
Implement the concept of Storage with amazon S3, sftp...
.. image:: https://odoo-community.org/readme-banner-image
   :target: https://odoo-community.org/get-involved?utm_source=readme
   :alt: Odoo Community Association

==========================
Filesystem Storage Backend
==========================

.. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! This file is generated by oca-gen-addon-readme !!
   !! changes will be overwritten.                   !!
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! source digest: sha256:43050d60f14fb2179088cecafc9185490a52d9ebfc179af1c4caefce457e4d19
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
    :target: https://odoo-community.org/page/development-status
    :alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
    :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
    :alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fstorage-lightgray.png?logo=github
    :target: https://github.com/OCA/storage/tree/17.0/fs_storage
    :alt: OCA/storage
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
    :target: https://translation.odoo-community.org/projects/storage-17-0/storage-17-0-fs_storage
    :alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
    :target: https://runboat.odoo-community.org/builds?repo=OCA/storage&target_branch=17.0
    :alt: Try me on Runboat

|badge1| |badge2| |badge3| |badge4| |badge5|

This addon is a technical addon that allows you to define filesystem-like storage for your data. It's used by other addons to transparently store their data in different kinds of storage.
Through the fs.storage record, you get access to an object that implements the `fsspec.spec.AbstractFileSystem <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`__ interface and therefore gives you a unified interface to access your data whatever storage protocol you decide to use. The list of supported protocols depends on the installed fsspec implementations. By default, the addon will install the following protocols: - LocalFileSystem - MemoryFileSystem - ZipFileSystem - TarFileSystem - FTPFileSystem - CachingFileSystem - WholeFileCacheFileSystem - SimpleCacheFileSystem - ReferenceFileSystem - GenericFileSystem - DirFileSystem - DatabricksFileSystem - GitHubFileSystem - JupyterFileSystem - OdooFileSystem The OdooFileSystem is the one that stores your data in a directory mounted inside your Odoo storage directory. It is the default FS Storage when creating a new fs.storage record. Other protocols are available by installing additional Python packages: - DropboxDriveFileSystem -> pip install fsspec[dropbox] - HTTPFileSystem -> pip install fsspec[http] - HTTPSFileSystem -> pip install fsspec[http] - GCSFileSystem -> pip install fsspec[gcs] - GSFileSystem -> pip install fsspec[gs] - GoogleDriveFileSystem -> pip install gdrivefs - SFTPFileSystem -> pip install fsspec[sftp] - HadoopFileSystem -> pip install fsspec[hdfs] - S3FileSystem -> pip install fsspec[s3] - WandbFS -> pip install wandbfs - OCIFileSystem -> pip install fsspec[oci] - AsyncLocalFileSystem -> pip install 'morefs[asynclocalfs]' - AzureDatalakeFileSystem -> pip install fsspec[adl] - AzureBlobFileSystem -> pip install fsspec[abfs] - DaskWorkerFileSystem -> pip install fsspec[dask] - GitFileSystem -> pip install fsspec[git] - SMBFileSystem -> pip install fsspec[smb] - LibArchiveFileSystem -> pip install fsspec[libarchive] - OSSFileSystem -> pip install ossfs - WebdavFileSystem -> pip install webdav4 - DVCFileSystem -> pip install dvc - 
XRootDFileSystem -> pip install fsspec-xrootd This list of supported protocols is not exhaustive and may change in the future depending on fsspec releases. You can find more information about the supported protocols in the `fsspec documentation <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`__. **Table of contents** .. contents:: :local: Usage ===== Configuration ------------- When you create a new backend, you must specify the following: - The name of the backend. This is the name used to identify the backend in Odoo. - The code of the backend. This code identifies the backend in the store_fname field of the ir.attachment model. It must be unique and is used as the scheme; example of the store_fname field: ``odoofs://abs34Tg11``. - The protocol used by the backend. The protocol refers to the supported protocols of the fsspec Python package. - A directory path. This is the root directory from which the filesystem will be mounted. This directory must exist. - The protocol options. These options are passed to the fsspec Python package when creating the filesystem. They depend on the protocol used and are described in the fsspec documentation. - Resolve env vars. This option resolves protocol option values starting with $ from environment variables. - Check Connection Method. If set, Odoo always checks the connection before using a storage and removes the fs connection from the cache if the check fails. - ``Create Marker file``: create a hidden file on the remote and then check that it exists. Use it if you have write access to the remote and it is not an issue to leave the marker file in the root directory. - ``List file``: list all files from the root directory. You can use it if the directory path does not contain a large number of files (for performance reasons). Some protocols defined in the fsspec package are wrappers around other protocols. 
For example, the SimpleCacheFileSystem protocol is a wrapper around any local filesystem protocol. In such cases, you must specify in the protocol options the protocol to be wrapped and the options to pass to it. For example, if you want to create a backend that uses the SimpleCacheFileSystem protocol, after selecting the SimpleCacheFileSystem protocol you must specify the protocol options as follows: .. code:: python { "directory_path": "/tmp/my_backend", "target_protocol": "odoofs", "target_options": {...}, } In this example, the SimpleCacheFileSystem protocol is used as a wrapper around the odoofs protocol. Server Environment ------------------ To ease the management of the filesystem storage configuration across the different environments, the configuration of the filesystem storages can be defined in environment files or directly in the main configuration file. For example, the configuration of a filesystem storage with the code fsprod can be provided in the main configuration file as follows: .. code:: ini [fs_storage.fsprod] protocol=s3 options={"endpoint_url": "https://my_s3_server/", "key": "KEY", "secret": "SECRET"} directory_path=my_bucket For this to work, a storage.backend record with the code fsprod must exist in the database. In your configuration section, you can specify the value for the following fields: - protocol - options - directory_path When evaluating directory_path, ``{db_name}`` is replaced by the database name. This is useful in multi-tenant setups completely controlled by configuration files. Migration from storage_backend ------------------------------ The fs_storage addon can be used to replace the storage_backend addon (it has been designed as a drop-in replacement). To ease the migration, the fs.storage model defines the high-level methods available in the storage_backend model. 
These methods are: - add - get - list_files - find_files - move_files - delete These methods are wrappers around the methods of the fsspec.AbstractFileSystem class (see https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem). They are marked as deprecated and will be removed in a future version (V18) of the addon. You should use the methods of the fsspec.AbstractFileSystem class instead, since they are more flexible and powerful. You can access the instance of the fsspec.AbstractFileSystem class through the fs property of a fs.storage record. Known issues / Roadmap ====================== - Transactions: fsspec comes with a transactional mechanism that, once started, gathers all the files created during the transaction and, if the transaction is committed, moves them to their final locations. It would be useful to bridge this with the transactional mechanism of Odoo. This would ensure that all the files created during a transaction are either all moved to their final locations, or all deleted if the transaction is rolled back. This mechanism is only valid for files created during the transaction by a call to the open method of the file system. It is not valid for other operations, such as rm, mv_file, ... . Changelog ========= 17.0.2.1.0 (2025-10-22) ----------------------- Features ~~~~~~~~ - Replace {db_name} with the database name in directory_path (`#db_name <https://github.com/OCA/storage/issues/db_name>`__) 17.0.2.0.4 (2025-08-19) ----------------------- Features ~~~~~~~~ - Allow setting check_connection_method in configuration file. 17.0.2.0.0 (2024-10-07) ----------------------- Features ~~~~~~~~ - Invalidate FS filesystem object cache when the connection fails, forcing a reconnection. 
(`#320 <https://github.com/OCA/storage/issues/320>`__) 16.0.1.1.0 (2023-12-22) ----------------------- **Features** - Add parameter on storage backend to resolve protocol options values starting with $ from environment variables (`#303 <https://github.com/OCA/storage/issues/303>`__) 16.0.1.0.3 (2023-10-17) ----------------------- **Bugfixes** - Fix access to technical models to be able to upload attachments for users with basic access (`#289 <https://github.com/OCA/storage/issues/289>`__) 16.0.1.0.2 (2023-10-09) ----------------------- **Bugfixes** - Avoid config error when using the webdav protocol. The auth option is expected to be a tuple, not a list. Since our config is loaded from a JSON file, we cannot use tuples. The fix converts the list to a tuple when the config is related to the webdav protocol and the auth option is in the config. (`#285 <https://github.com/OCA/storage/issues/285>`__) Bug Tracker =========== Bugs are tracked on `GitHub Issues <https://github.com/OCA/storage/issues>`_. In case of trouble, please check there if your issue has already been reported. If you spotted it first, help us to smash it by providing detailed and welcome `feedback <https://github.com/OCA/storage/issues/new?body=module:%20fs_storage%0Aversion:%2017.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_. Do not contact contributors directly about support or help with technical issues. Credits ======= Authors ------- * ACSONE SA/NV Contributors ------------ - Laurent Mignon <laurent.mignon@acsone.eu> - Sébastien BEAU <sebastien.beau@akretion.com> Maintainers ----------- This module is maintained by the OCA. .. image:: https://odoo-community.org/logo.png :alt: Odoo Community Association :target: https://odoo-community.org OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use. 
This module is part of the `OCA/storage <https://github.com/OCA/storage/tree/17.0/fs_storage>`_ project on GitHub. You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
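As a supplement to the "Resolve env vars" option described under Configuration above, one plausible resolution scheme for $-prefixed protocol options looks like this (a hypothetical helper, not the addon's implementation):

```python
import os

def resolve_env_vars(options: dict) -> dict:
    """Hypothetical helper: replace string values starting with '$'
    by the matching environment variable, as the 'Resolve env vars'
    backend flag describes (not the addon's actual implementation)."""
    return {
        key: os.environ.get(value[1:], value)
        if isinstance(value, str) and value.startswith("$")
        else value
        for key, value in options.items()
    }

os.environ["FS_DEMO_SECRET"] = "s3cret"  # stand-in for a deployed secret
resolved = resolve_env_vars({"key": "KEY", "secret": "$FS_DEMO_SECRET"})
```

Keeping secrets out of the options JSON and injecting them from the environment is the point of the flag; only the `$`-prefixed values are touched.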
text/x-rst
ACSONE SA/NV, Odoo Community Association (OCA)
support@odoo-community.org
null
null
LGPL-3
null
[ "Programming Language :: Python", "Framework :: Odoo", "Framework :: Odoo :: 17.0", "License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)", "Development Status :: 4 - Beta" ]
[]
https://github.com/OCA/storage
null
>=3.10
[]
[]
[]
[ "fsspec>=2024.5.0", "odoo-addon-server_environment<17.1dev,>=17.0dev", "odoo<17.1dev,>=17.0a" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:24:02.791615
odoo_addon_fs_storage-17.0.2.1.0.1-py3-none-any.whl
61,902
41/7c/2c10383f3a51e09be871089e537da2f793759de7c191c4666903316e5553/odoo_addon_fs_storage-17.0.2.1.0.1-py3-none-any.whl
py3
bdist_wheel
null
false
1da68b979199c3a29cbddec45862ebd8
8dbc95c8d475c6545ad2f29daabef0ccab4ee28e40901e6c2a1b6a4473d7f016
417c2c10383f3a51e09be871089e537da2f793759de7c191c4666903316e5553
null
[]
2.1
odoo-addon-fs-storage
19.0.1.0.0.5
Implement the concept of Storage with amazon S3, sftp...
.. image:: https://odoo-community.org/readme-banner-image :target: https://odoo-community.org/get-involved?utm_source=readme :alt: Odoo Community Association ========================== Filesystem Storage Backend ========================== .. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !! This file is generated by oca-gen-addon-readme !! !! changes will be overwritten. !! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !! source digest: sha256:b29d04846136913c47213b78a13afecebd5b01937eddbae183e25ea982dab818 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! .. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png :target: https://odoo-community.org/page/development-status :alt: Beta .. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html :alt: License: LGPL-3 .. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fstorage-lightgray.png?logo=github :target: https://github.com/OCA/storage/tree/19.0/fs_storage :alt: OCA/storage .. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png :target: https://translation.odoo-community.org/projects/storage-19-0/storage-19-0-fs_storage :alt: Translate me on Weblate .. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png :target: https://runboat.odoo-community.org/builds?repo=OCA/storage&target_branch=19.0 :alt: Try me on Runboat |badge1| |badge2| |badge3| |badge4| |badge5| This addon is a technical addon that lets you define filesystem-like storage for your data. Other addons use it to store their data transparently in different kinds of storage. 
Through the fs.storage record, you get access to an object that implements the `fsspec.spec.AbstractFileSystem <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`__ interface and therefore gives you a unified interface to access your data whatever storage protocol you decide to use. The list of supported protocols depends on the installed fsspec implementations. By default, the addon will install the following protocols: - LocalFileSystem - MemoryFileSystem - ZipFileSystem - TarFileSystem - FTPFileSystem - CachingFileSystem - WholeFileCacheFileSystem - SimpleCacheFileSystem - ReferenceFileSystem - GenericFileSystem - DirFileSystem - DatabricksFileSystem - GitHubFileSystem - JupyterFileSystem - OdooFileSystem The OdooFileSystem is the one that stores your data in a directory mounted inside your Odoo storage directory. It is the default FS Storage when creating a new fs.storage record. Other protocols are available by installing additional Python packages: - DropboxDriveFileSystem -> pip install fsspec[dropbox] - HTTPFileSystem -> pip install fsspec[http] - HTTPSFileSystem -> pip install fsspec[http] - GCSFileSystem -> pip install fsspec[gcs] - GSFileSystem -> pip install fsspec[gs] - GoogleDriveFileSystem -> pip install gdrivefs - SFTPFileSystem -> pip install fsspec[sftp] - HadoopFileSystem -> pip install fsspec[hdfs] - S3FileSystem -> pip install fsspec[s3] - WandbFS -> pip install wandbfs - OCIFileSystem -> pip install fsspec[oci] - AsyncLocalFileSystem -> pip install 'morefs[asynclocalfs]' - AzureDatalakeFileSystem -> pip install fsspec[adl] - AzureBlobFileSystem -> pip install fsspec[abfs] - DaskWorkerFileSystem -> pip install fsspec[dask] - GitFileSystem -> pip install fsspec[git] - SMBFileSystem -> pip install fsspec[smb] - LibArchiveFileSystem -> pip install fsspec[libarchive] - OSSFileSystem -> pip install ossfs - WebdavFileSystem -> pip install webdav4 - DVCFileSystem -> pip install dvc - 
XRootDFileSystem -> pip install fsspec-xrootd This list of supported protocols is not exhaustive and may change in the future depending on fsspec releases. You can find more information about the supported protocols in the `fsspec documentation <https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem>`__. **Table of contents** .. contents:: :local: Usage ===== Configuration ------------- When you create a new backend, you must specify the following: - The name of the backend. This is the name used to identify the backend in Odoo. - The code of the backend. This code identifies the backend in the store_fname field of the ir.attachment model. It must be unique and is used as the scheme; example of the store_fname field: ``odoofs://abs34Tg11``. - The protocol used by the backend. The protocol refers to the supported protocols of the fsspec Python package. - A directory path. This is the root directory from which the filesystem will be mounted. This directory must exist. - The protocol options. These options are passed to the fsspec Python package when creating the filesystem. They depend on the protocol used and are described in the fsspec documentation. - Resolve env vars. This option resolves protocol option values starting with $ from environment variables. - Check Connection Method. If set, Odoo always checks the connection before using a storage and removes the fs connection from the cache if the check fails. - ``Create Marker file``: create a hidden file on the remote and then check that it exists. Use it if you have write access to the remote and it is not an issue to leave the marker file in the root directory. - ``List file``: list all files from the root directory. You can use it if the directory path does not contain a large number of files (for performance reasons). Some protocols defined in the fsspec package are wrappers around other protocols. 
For example, the SimpleCacheFileSystem protocol is a wrapper around any local filesystem protocol. In such cases, you must specify in the protocol options the protocol to be wrapped and the options to pass to it. For example, if you want to create a backend that uses the SimpleCacheFileSystem protocol, after selecting the SimpleCacheFileSystem protocol you must specify the protocol options as follows: .. code:: python { "directory_path": "/tmp/my_backend", "target_protocol": "odoofs", "target_options": {...}, } In this example, the SimpleCacheFileSystem protocol is used as a wrapper around the odoofs protocol. Server Environment ------------------ To ease the management of the filesystem storage configuration across the different environments, the configuration of the filesystem storages can be defined in environment files or directly in the main configuration file. For example, the configuration of a filesystem storage with the code fsprod can be provided in the main configuration file as follows: .. code:: ini [fs_storage.fsprod] protocol=s3 options={"endpoint_url": "https://my_s3_server/", "key": "KEY", "secret": "SECRET"} directory_path=my_bucket For this to work, a storage.backend record with the code fsprod must exist in the database. In your configuration section, you can specify the value for the following fields: - protocol - options - directory_path Migration from storage_backend ------------------------------ The fs_storage addon can be used to replace the storage_backend addon (it has been designed as a drop-in replacement). To ease the migration, the fs.storage model defines the high-level methods available in the storage_backend model. These methods are: - add - get - list_files - find_files - move_files - delete These methods are wrappers around the methods of the fsspec.AbstractFileSystem class (see https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem). 
These methods are marked as deprecated and will be removed in a future version (V18) of the addon. You should use the methods of the fsspec.AbstractFileSystem class instead, since they are more flexible and powerful. You can access the instance of the fsspec.AbstractFileSystem class through the fs property of a fs.storage record. Known issues / Roadmap ====================== - Transactions: fsspec comes with a transactional mechanism that, once started, gathers all the files created during the transaction and, if the transaction is committed, moves them to their final locations. It would be useful to bridge this with the transactional mechanism of Odoo. This would ensure that all the files created during a transaction are either all moved to their final locations, or all deleted if the transaction is rolled back. This mechanism is only valid for files created during the transaction by a call to the open method of the file system. It is not valid for other operations, such as rm, mv_file, ... . Changelog ========= 18.0.2.0.1 (2025-07-23) ----------------------- Features ~~~~~~~~ - Allow setting check_connection_method in configuration file. 18.0.1.0.1 (2024-11-10) ----------------------- Features ~~~~~~~~ - Invalidate FS filesystem object cache when the connection fails, forcing a reconnection. (`#320 <https://github.com/OCA/storage/issues/320>`__) 16.0.1.1.0 (2023-12-22) ----------------------- **Features** - Add parameter on storage backend to resolve protocol options values starting with $ from environment variables (`#303 <https://github.com/OCA/storage/issues/303>`__) 16.0.1.0.3 (2023-10-17) ----------------------- **Bugfixes** - Fix access to technical models to be able to upload attachments for users with basic access (`#289 <https://github.com/OCA/storage/issues/289>`__) 16.0.1.0.2 (2023-10-09) ----------------------- **Bugfixes** - Avoid config error when using the webdav protocol. The auth option is expected to be a tuple, not a list. 
Since our config is loaded from a JSON file, we cannot use tuples. The fix converts the list to a tuple when the config is related to the webdav protocol and the auth option is in the config. (`#285 <https://github.com/OCA/storage/issues/285>`__) Bug Tracker =========== Bugs are tracked on `GitHub Issues <https://github.com/OCA/storage/issues>`_. In case of trouble, please check there if your issue has already been reported. If you spotted it first, help us to smash it by providing detailed and welcome `feedback <https://github.com/OCA/storage/issues/new?body=module:%20fs_storage%0Aversion:%2019.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_. Do not contact contributors directly about support or help with technical issues. Credits ======= Authors ------- * ACSONE SA/NV Contributors ------------ - Laurent Mignon <laurent.mignon@acsone.eu> - Sébastien BEAU <sebastien.beau@akretion.com> - Marie Lejeune <marie.lejeune@acsone.eu> - Julien Coux <julien.coux@camptocamp.com> Maintainers ----------- This module is maintained by the OCA. .. image:: https://odoo-community.org/logo.png :alt: Odoo Community Association :target: https://odoo-community.org OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use. This module is part of the `OCA/storage <https://github.com/OCA/storage/tree/19.0/fs_storage>`_ project on GitHub. You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
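As a supplement to the Server Environment section above, the ini fragment shown there can be parsed with the Python standard library alone; the addon's actual loader may differ (a sketch under that assumption):

```python
import configparser
import json

# Sketch only: parse an fs_storage section like the one shown under
# "Server Environment" (not the addon's actual loader).
RAW = """
[fs_storage.fsprod]
protocol=s3
options={"endpoint_url": "https://my_s3_server/", "key": "KEY", "secret": "SECRET"}
directory_path=my_bucket
"""

parser = configparser.ConfigParser()
parser.read_string(RAW)
section = parser["fs_storage.fsprod"]

protocol = section["protocol"]             # plain string
options = json.loads(section["options"])   # dict of protocol options
directory_path = section["directory_path"]
```

Note that `options` is JSON inside an ini value, which is why (per the webdav changelog entry above) tuples cannot be expressed there.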
text/x-rst
ACSONE SA/NV, Odoo Community Association (OCA)
support@odoo-community.org
null
null
LGPL-3
null
[ "Programming Language :: Python", "Framework :: Odoo", "Framework :: Odoo :: 19.0", "License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)", "Development Status :: 4 - Beta" ]
[]
https://github.com/OCA/storage
null
null
[]
[]
[]
[ "fsspec>=2024.5.0", "odoo-addon-server_environment==19.0.*", "odoo==19.0.*" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:24:16.973510
odoo_addon_fs_storage-19.0.1.0.0.5-py3-none-any.whl
66,712
2e/ff/15f04c3ddc2300001b0d3800f126d0cbb03b90d96502499e698fff84475c/odoo_addon_fs_storage-19.0.1.0.0.5-py3-none-any.whl
py3
bdist_wheel
null
false
369e1083da167ac5661eb0fbc2e93fa2
3361faa6c61b5299fd7cfc026fac6981546a0b8f44e32d017acdfccdcb4af82b
2eff15f04c3ddc2300001b0d3800f126d0cbb03b90d96502499e698fff84475c
null
[]
2.3
dj-playlist-optimizer
1.1.0
DJ playlist optimizer using Google OR-Tools for harmonic mixing and BPM matching
# 🎧 DJ Playlist Optimizer Optimize DJ playlists for harmonic mixing using Google OR-Tools constraint programming. ## Features - ✨ **Longest Path Optimization**: Finds the maximum number of tracks that can be mixed together - 🎵 **Harmonic Mixing**: Uses the Camelot Wheel system for key compatibility - 🎧 **Rekordbox Integration**: Read playlists directly from your local Rekordbox 6/7 database (tested with v7.2.8) - 🔊 **BPM Matching**: Supports direct, halftime, and doubletime BPM compatibility - ⚙️ **Configurable Strictness**: STRICT, MODERATE, or RELAXED harmonic compatibility levels - 📤 **Rekordbox Export**: Export results to Rekordbox XML or write directly to the Rekordbox database - 🚀 **Fast**: Powered by Google OR-Tools CP-SAT solver (award-winning constraint solver) - 📦 **SDK + CLI**: Use as a Python library or command-line tool ## Installation ```bash uv add dj-playlist-optimizer ``` Or with pip: ```bash pip install dj-playlist-optimizer ``` ## Quick Start ### SDK Usage ```python from dj_playlist_optimizer import PlaylistOptimizer, Track, HarmonicLevel tracks = [ Track(id="track_001", key="8A", bpm=128), Track(id="track_002", key="8B", bpm=130), Track(id="track_003", key="9A", bpm=125), ] optimizer = PlaylistOptimizer( bpm_tolerance=10, allow_halftime_bpm=True, max_violation_pct=0.10, harmonic_level=HarmonicLevel.STRICT, ) result = optimizer.optimize(tracks) for i, track in enumerate(result.playlist, 1): print(f"{i}. 
{track.id} ({track.key}, {track.bpm} BPM)") ``` ### CLI Usage ```bash # Basic usage dj-optimize tracks.json # With custom settings dj-optimize tracks.json --bpm-tolerance 8 --harmonic-level moderate # Save results to JSON dj-optimize tracks.json --output result.json # Use with Rekordbox (v6/v7) dj-optimize --rekordbox # List playlists dj-optimize --rekordbox --playlist "Techno" # Optimize specific playlist dj-optimize --rekordbox --playlist "Techno" --output r.xml # Export to Rekordbox XML dj-optimize --rekordbox --playlist "Techno" --write-to-db # Write directly to Rekordbox DB # Enable verbose logging dj-optimize tracks.json -v # INFO level dj-optimize tracks.json -vv # DEBUG level ``` ## Rekordbox Integration The tool provides two ways to save your optimized playlists back to Rekordbox: ### 1. XML Export (Recommended) Export the results to an XML file that can be imported into Rekordbox: ```bash dj-optimize --rekordbox --playlist "My Playlist" --output optimized.xml ``` In Rekordbox: 1. Go to **File > Import > Import Playlist** 2. Select `optimized.xml` 3. The playlist will appear in the `ROOT` folder (e.g., `My Playlist_20260115_120000`) ### 2. Direct Database Write (Advanced) Write the optimized playlist directly to your Rekordbox 6 database: ```bash dj-optimize --rekordbox --playlist "My Playlist" --write-to-db ``` **⚠️ WARNING:** - **Close Rekordbox** before running this command. - This modifies your `master.db` file directly. - **Backup your database** before using this feature. ## Input Format JSON file with tracks containing `id`, `key` (Camelot notation), and `bpm`: ```json { "tracks": [ {"id": "track_001", "key": "8A", "bpm": 128}, {"id": "track_002", "key": "8B", "bpm": 130}, {"id": "track_003", "key": "9A", "bpm": 125} ] } ``` ## How It Works ### 1. BPM Compatibility Adjacent tracks must have compatible BPMs within tolerance: | Track A | Track B | Tolerance | Match? 
| Reason | |---------|---------|-----------|--------|--------| | 128 BPM | 130 BPM | ±10 | ✅ | Direct (diff = 2) | | 128 BPM | 64 BPM | ±10 | ✅ | Half-time (128 = 64×2) | | 75 BPM | 150 BPM | ±10 | ✅ | Double-time (75×2 = 150) | | 128 BPM | 100 BPM | ±10 | ❌ | Too far | ### 2. Harmonic Mixing (Camelot Wheel) Harmonic compatibility levels: **STRICT** (default): - Same key (8A → 8A) - ±1 hour same letter (8A → 7A, 9A) - Same hour different letter (8A → 8B) **MODERATE**: - Above + ±1 hour different letter (8A → 9B, 7B) **RELAXED**: - Above + ±3 hours (8A → 5A, 11A) ### 3. Optimization Goal Maximize playlist length while keeping non-harmonic transitions below the threshold (default: 10%). ## Configuration Options | Parameter | Default | Description | |-----------|---------|-------------| | `bpm_tolerance` | 10.0 | Maximum BPM difference for direct match | | `allow_halftime_bpm` | True | Enable half/double-time matching | | `max_violation_pct` | 0.10 | Max percentage of non-harmonic transitions | | `harmonic_level` | STRICT | Harmonic compatibility strictness | | `time_limit_seconds` | 60.0 | Solver time limit | ## Examples See `examples/` directory: - `example_tracks.json` - Sample input data - `sdk_usage.py` - SDK usage demonstration - `logging_example.py` - Logging configuration example ## Logging The library uses Python's standard `logging` module. 
Configure logging to see detailed information about the optimization process: ```python import logging logging.basicConfig( level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) ``` Log levels: - `WARNING` (default): Errors and warnings only - `INFO`: Optimization progress, statistics, and results - `DEBUG`: Detailed solver information, edge counts, and configuration CLI verbosity: - No flag: WARNING level - `-v`: INFO level - `-vv`: DEBUG level ## Development ```bash # Clone repository git clone https://github.com/yourusername/dj-playlist-optimizer cd dj-playlist-optimizer # Install with dev dependencies uv sync --dev # Install pre-commit hooks uv run pre-commit install uv run pre-commit install --hook-type commit-msg # Run tests uv run pytest # Lint and format uv run ruff check # Check for issues uv run ruff check --fix # Auto-fix issues uv run ruff format # Format code # Run pre-commit on all files uv run pre-commit run --all-files # Run example uv run python examples/sdk_usage.py ``` ### Commit Message Format This project enforces [Conventional Commits](https://www.conventionalcommits.org/): ``` <type>: <description> [optional body] [optional footer] ``` **Allowed types:** - `feat` - New feature - `fix` - Bug fix - `docs` - Documentation changes - `style` - Code style changes (formatting, etc.) - `refactor` - Code refactoring - `perf` - Performance improvements - `test` - Test changes - `build` - Build system changes - `ci` - CI configuration changes - `chore` - Other changes (deps, etc.) - `revert` - Revert previous commit **Examples:** ```bash git commit -m "feat: add halftime BPM matching" git commit -m "fix: correct Camelot wheel compatibility check" git commit -m "docs: update README with logging examples" git commit -m "refactor: simplify return statements in bpm.py" ``` ## How the Solver Works The optimizer uses Google OR-Tools CP-SAT solver with: 1. **Binary Variables**: `included[i]` = track i is in playlist 2. 
**Edge Variables**: `edge[i,j]` = track j follows track i 3. **Circuit Constraint**: `AddCircuit` ensures valid track ordering 4. **BPM Constraints**: Only create edges between BPM-compatible tracks 5. **Harmonic Soft Constraints**: Penalize non-harmonic transitions 6. **Objective**: Maximize `sum(included)` ## License MIT ## Credits Built with: - [Google OR-Tools](https://developers.google.com/optimization) - Constraint programming solver - [pyrekordbox](https://github.com/dylanljones/pyrekordbox) - Rekordbox database access - Camelot Wheel system by Mark Davis (Mixed In Key)
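The BPM rules from the table under "How It Works" condense to a few lines; `bpm_compatible` below is a hypothetical helper for illustration, not part of the package's API, and it assumes the same tolerance applies to half-time and double-time matches:

```python
def bpm_compatible(a: float, b: float, tolerance: float = 10.0,
                   allow_halftime: bool = True) -> bool:
    """True if two BPMs match directly, or via half/double-time.
    Hypothetical sketch of the rule, not the package's implementation."""
    if abs(a - b) <= tolerance:  # direct match
        return True
    if allow_halftime:           # half-time / double-time match
        return abs(a * 2 - b) <= tolerance or abs(a - b * 2) <= tolerance
    return False

# The four rows of the compatibility table:
# 128 vs 130 -> direct; 128 vs 64 -> half-time;
# 75 vs 150 -> double-time; 128 vs 100 -> no match
```

Disabling `allow_halftime` reduces the check to the direct-tolerance comparison, mirroring the `allow_halftime_bpm` configuration option.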
text/markdown
Sage Choi
Sage Choi <sage.choi@gmail.com>
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "ortools>=9.8.3296", "pyrekordbox>=0.1.0", "pytest>=7.4.0; extra == \"dev\"", "pytest-cov>=4.1.0; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:25:37.351002
dj_playlist_optimizer-1.1.0-py3-none-any.whl
22,815
d3/14/dec8190a5a61a5c3a464de5067964f948472d28495179600bcdfaa637e10/dj_playlist_optimizer-1.1.0-py3-none-any.whl
py3
bdist_wheel
null
false
b1f6784e95f39e4f362a9a3fcc35f9dd
bafe7685f52a9a5e3289f0e5fce0b49be10a5bc404e9c01553ffd985b9a70343
d314dec8190a5a61a5c3a464de5067964f948472d28495179600bcdfaa637e10
null
[]
2.3
dj-playlist-optimizer
1.1.0
DJ playlist optimizer using Google OR-Tools for harmonic mixing and BPM matching
# 🎧 DJ Playlist Optimizer Optimize DJ playlists for harmonic mixing using Google OR-Tools constraint programming. ## Features - ✨ **Longest Path Optimization**: Finds the maximum number of tracks that can be mixed together - 🎵 **Harmonic Mixing**: Uses the Camelot Wheel system for key compatibility - 🎧 **Rekordbox Integration**: Read playlists directly from your local Rekordbox 6/7 database (tested with v7.2.8) - 🔊 **BPM Matching**: Supports direct, halftime, and doubletime BPM compatibility - ⚙️ **Configurable Strictness**: STRICT, MODERATE, or RELAXED harmonic compatibility levels - 📤 **Rekordbox Export**: Export results to Rekordbox XML or write directly to the Rekordbox database - 🚀 **Fast**: Powered by Google OR-Tools CP-SAT solver (award-winning constraint solver) - 📦 **SDK + CLI**: Use as a Python library or command-line tool ## Installation ```bash uv add dj-playlist-optimizer ``` Or with pip: ```bash pip install dj-playlist-optimizer ``` ## Quick Start ### SDK Usage ```python from dj_playlist_optimizer import PlaylistOptimizer, Track, HarmonicLevel tracks = [ Track(id="track_001", key="8A", bpm=128), Track(id="track_002", key="8B", bpm=130), Track(id="track_003", key="9A", bpm=125), ] optimizer = PlaylistOptimizer( bpm_tolerance=10, allow_halftime_bpm=True, max_violation_pct=0.10, harmonic_level=HarmonicLevel.STRICT, ) result = optimizer.optimize(tracks) for i, track in enumerate(result.playlist, 1): print(f"{i}. 
{track.id} ({track.key}, {track.bpm} BPM)") ``` ### CLI Usage ```bash # Basic usage dj-optimize tracks.json # With custom settings dj-optimize tracks.json --bpm-tolerance 8 --harmonic-level moderate # Save results to JSON dj-optimize tracks.json --output result.json # Use with Rekordbox (v6/v7) dj-optimize --rekordbox # List playlists dj-optimize --rekordbox --playlist "Techno" # Optimize specific playlist dj-optimize --rekordbox --playlist "Techno" --output r.xml # Export to Rekordbox XML dj-optimize --rekordbox --playlist "Techno" --write-to-db # Write directly to Rekordbox DB # Enable verbose logging dj-optimize tracks.json -v # INFO level dj-optimize tracks.json -vv # DEBUG level ``` ## Rekordbox Integration The tool provides two ways to save your optimized playlists back to Rekordbox: ### 1. XML Export (Recommended) Export the results to an XML file that can be imported into Rekordbox: ```bash dj-optimize --rekordbox --playlist "My Playlist" --output optimized.xml ``` In Rekordbox: 1. Go to **File > Import > Import Playlist** 2. Select `optimized.xml` 3. The playlist will appear in the `ROOT` folder (e.g., `My Playlist_20260115_120000`) ### 2. Direct Database Write (Advanced) Write the optimized playlist directly to your Rekordbox 6/7 database: ```bash dj-optimize --rekordbox --playlist "My Playlist" --write-to-db ``` **⚠️ WARNING:** - **Close Rekordbox** before running this command. - This modifies your `master.db` file directly. - **Backup your database** before using this feature. ## Input Format JSON file with tracks containing `id`, `key` (Camelot notation), and `bpm`: ```json { "tracks": [ {"id": "track_001", "key": "8A", "bpm": 128}, {"id": "track_002", "key": "8B", "bpm": 130}, {"id": "track_003", "key": "9A", "bpm": 125} ] } ``` ## How It Works ### 1. BPM Compatibility Adjacent tracks must have compatible BPMs within tolerance: | Track A | Track B | Tolerance | Match? 
| Reason | |---------|---------|-----------|--------|--------| | 128 BPM | 130 BPM | ±10 | ✅ | Direct (diff = 2) | | 128 BPM | 64 BPM | ±10 | ✅ | Half-time (128 = 64×2) | | 75 BPM | 150 BPM | ±10 | ✅ | Double-time (75×2 = 150) | | 128 BPM | 100 BPM | ±10 | ❌ | Too far | ### 2. Harmonic Mixing (Camelot Wheel) Harmonic compatibility levels: **STRICT** (default): - Same key (8A → 8A) - ±1 hour same letter (8A → 7A, 9A) - Same hour different letter (8A → 8B) **MODERATE**: - Above + ±1 hour different letter (8A → 9B, 7B) **RELAXED**: - Above + ±3 hours (8A → 5A, 11A) ### 3. Optimization Goal Maximize playlist length while keeping non-harmonic transitions below the threshold (default: 10%). ## Configuration Options | Parameter | Default | Description | |-----------|---------|-------------| | `bpm_tolerance` | 10.0 | Maximum BPM difference for direct match | | `allow_halftime_bpm` | True | Enable half/double-time matching | | `max_violation_pct` | 0.10 | Max percentage of non-harmonic transitions | | `harmonic_level` | STRICT | Harmonic compatibility strictness | | `time_limit_seconds` | 60.0 | Solver time limit | ## Examples See `examples/` directory: - `example_tracks.json` - Sample input data - `sdk_usage.py` - SDK usage demonstration - `logging_example.py` - Logging configuration example ## Logging The library uses Python's standard `logging` module. 
Configure logging to see detailed information about the optimization process: ```python import logging logging.basicConfig( level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) ``` Log levels: - `WARNING` (default): Errors and warnings only - `INFO`: Optimization progress, statistics, and results - `DEBUG`: Detailed solver information, edge counts, and configuration CLI verbosity: - No flag: WARNING level - `-v`: INFO level - `-vv`: DEBUG level ## Development ```bash # Clone repository git clone https://github.com/yourusername/dj-playlist-optimizer cd dj-playlist-optimizer # Install with dev dependencies uv sync --dev # Install pre-commit hooks uv run pre-commit install uv run pre-commit install --hook-type commit-msg # Run tests uv run pytest # Lint and format uv run ruff check # Check for issues uv run ruff check --fix # Auto-fix issues uv run ruff format # Format code # Run pre-commit on all files uv run pre-commit run --all-files # Run example uv run python examples/sdk_usage.py ``` ### Commit Message Format This project enforces [Conventional Commits](https://www.conventionalcommits.org/): ``` <type>: <description> [optional body] [optional footer] ``` **Allowed types:** - `feat` - New feature - `fix` - Bug fix - `docs` - Documentation changes - `style` - Code style changes (formatting, etc.) - `refactor` - Code refactoring - `perf` - Performance improvements - `test` - Test changes - `build` - Build system changes - `ci` - CI configuration changes - `chore` - Other changes (deps, etc.) - `revert` - Revert previous commit **Examples:** ```bash git commit -m "feat: add halftime BPM matching" git commit -m "fix: correct Camelot wheel compatibility check" git commit -m "docs: update README with logging examples" git commit -m "refactor: simplify return statements in bpm.py" ``` ## How the Solver Works The optimizer uses Google OR-Tools CP-SAT solver with: 1. **Binary Variables**: `included[i]` = track i is in playlist 2. 
**Edge Variables**: `edge[i,j]` = track j follows track i 3. **Circuit Constraint**: `AddCircuit` ensures valid track ordering 4. **BPM Constraints**: Only create edges between BPM-compatible tracks 5. **Harmonic Soft Constraints**: Penalize non-harmonic transitions 6. **Objective**: Maximize `sum(included)` ## License MIT ## Credits Built with: - [Google OR-Tools](https://developers.google.com/optimization) - Constraint programming solver - [pyrekordbox](https://github.com/dylanljones/pyrekordbox) - Rekordbox database access - Camelot Wheel system by Mark Davis (Mixed In Key)
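The BPM and Camelot rules described above can be sketched in plain Python. This is a hedged illustration of the compatibility logic, not the package's actual implementation; the function names are hypothetical:

```python
def bpm_compatible(a: float, b: float, tolerance: float = 10.0,
                   allow_halftime: bool = True) -> bool:
    """BPM matching per the table above: direct, half-time, or double-time."""
    if abs(a - b) <= tolerance:  # direct match
        return True
    if allow_halftime:
        # half-time / double-time: compare one track at twice its tempo
        return abs(a * 2 - b) <= tolerance or abs(a - b * 2) <= tolerance
    return False


def camelot_compatible(k1: str, k2: str, level: str = "STRICT") -> bool:
    """Harmonic compatibility on the Camelot wheel (hours 1-12, letters A/B)."""
    h1, l1 = int(k1[:-1]), k1[-1]
    h2, l2 = int(k2[:-1]), k2[-1]
    dist = min(abs(h1 - h2), 12 - abs(h1 - h2))  # the wheel wraps 12 -> 1
    if l1 == l2:
        # same letter: STRICT/MODERATE allow +-1 hour, RELAXED up to +-3
        return dist <= (3 if level == "RELAXED" else 1)
    # different letter: STRICT allows the same hour only (8A -> 8B);
    # MODERATE and RELAXED also allow +-1 hour (8A -> 9B, 7B)
    return dist == 0 if level == "STRICT" else dist <= 1
```

For example, `bpm_compatible(128, 64)` is a half-time match, and `camelot_compatible("8A", "9B")` is rejected at STRICT but accepted at MODERATE.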
text/markdown
Sage Choi
Sage Choi <sage.choi@gmail.com>
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "ortools>=9.8.3296", "pyrekordbox>=0.1.0", "pytest>=7.4.0; extra == \"dev\"", "pytest-cov>=4.1.0; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:25:38.825878
dj_playlist_optimizer-1.1.0.tar.gz
18,209
11/68/d9ffcc563bf4cb688fa814081842e17b81ca5cb97bef082102eb5de177d8/dj_playlist_optimizer-1.1.0.tar.gz
source
sdist
null
false
4f94b45da2587ffc906059f0686ff91e
ca796df41abe652f6e4b51d6acd01504e928ef5ba7c3c7181e1dd63575f728a8
1168d9ffcc563bf4cb688fa814081842e17b81ca5cb97bef082102eb5de177d8
null
[]
2.4
hubify-dataset
0.1.0
Convert object detection datasets (COCO, YOLO, Pascal VOC, etc.) to HuggingFace format
# Hubify ![Test & Lint](https://github.com/benjamintli/coco2hf/workflows/Test%20%26%20Lint/badge.svg) ![CLI Smoke Test](https://github.com/benjamintli/coco2hf/workflows/CLI%20Smoke%20Test/badge.svg) Convert object detection datasets to HuggingFace format and upload to the Hub. **Currently supported formats:** - COCO format annotations - YOLO format annotations - YOLO OBB format annotations **Coming soon:** Pascal VOC, Labelme, and more! ## Motivations for this tool HuggingFace has become the de facto *open source* community for uploading datasets and models. It's best known for LLMs, but there's nothing about HuggingFace's dataset hosting that's specific to language modeling. This tool consolidates the different formats from the object detection domain (COCO, Pascal VOC, etc.) into the layout HuggingFace suggests for its Image Datasets, and uploads the result to HuggingFace Hub. ## Installation ```bash # Install with uv (recommended) uv pip install -e . # Or with pip pip install -e . 
``` ## Usage After installation, you can use the `hubify` command: ```bash # Auto-detect annotations in train/validation/test directories hubify --data-dir /path/to/images --format coco # Manually specify annotation files hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --validation-annotations /path/to/instances_val2017.json # Generate sample visualizations hubify --data-dir /path/to/images --visualize # Push to HuggingFace Hub hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --push-to-hub username/my-dataset ``` Or for yolo: ``` hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo-obb --push-to-hub benjamintli/dota-v1.5 hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo --push-to-hub benjamintli/dota-v1.5 ``` Or run directly with Python (from the virtual environment): ```bash source .venv/bin/activate python -m src.main --data-dir /path/to/images ``` ## Expected Directory Structure * For coco: ``` data-dir/ ├── train/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) ├── validation/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) └── test/ (optional) ├── instances*.json └── *.jpg ``` ## Output The tool generates `metadata.jsonl` files in each split directory: ``` data-dir/ ├── train/ │ └── metadata.jsonl └── validation/ └── metadata.jsonl ``` Each line in `metadata.jsonl` contains: ```json { "file_name": "image.jpg", "objects": { "bbox": [[x, y, width, height], ...], "category": [0, 1, ...] 
} } ``` ## Options - `--data-dir`: Root directory containing train/validation/test subdirectories (required) - `--train-annotations`: Path to training annotations JSON (optional) - `--validation-annotations`: Path to validation annotations JSON (optional) - `--test-annotations`: Path to test annotations JSON (optional) - `--visualize`: Generate sample visualization images with bounding boxes - `--push-to-hub`: Push dataset to HuggingFace Hub (format: `username/dataset-name`) - `--token`: HuggingFace API token (optional, defaults to `HF_TOKEN` env var or `huggingface-cli login`) ### Authentication for Hub Push When using `--push-to-hub`, the tool looks for your HuggingFace token in this order: 1. `--token YOUR_TOKEN` (CLI argument) 2. `HF_TOKEN` environment variable 3. Token from `huggingface-cli login` If no token is found, you'll get a helpful error message with instructions.
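The `metadata.jsonl` output described above can be illustrated with a minimal, stdlib-only sketch that flattens COCO-style annotations into that shape. This is an assumption-laden illustration, not the tool's actual code; the function name and the contiguous 0-based category remapping are hypothetical:

```python
import json


def coco_to_metadata_lines(coco: dict) -> list[str]:
    """Flatten a minimal COCO annotation dict into metadata.jsonl lines."""
    images = {img["id"]: img["file_name"] for img in coco["images"]}
    # remap original category ids to contiguous 0-based labels (assumed)
    cat_index = {cid: i for i, cid in
                 enumerate(sorted(c["id"] for c in coco["categories"]))}
    objects = {img_id: {"bbox": [], "category": []} for img_id in images}
    for ann in coco["annotations"]:
        rec = objects[ann["image_id"]]
        rec["bbox"].append(ann["bbox"])  # COCO bboxes are [x, y, width, height]
        rec["category"].append(cat_index[ann["category_id"]])
    return [json.dumps({"file_name": images[i], "objects": o})
            for i, o in objects.items()]
```

Each returned string is one line of `metadata.jsonl`, matching the `{"file_name": ..., "objects": {"bbox": ..., "category": ...}}` schema shown earlier.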
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "datasets>=4.4.2", "huggingface-hub>=1.2.3", "pillow>=12.1.0", "pyyaml>=6.0", "rich>=13.9.4", "ruff>=0.14.10", "ruff>=0.14.10; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:26:23.420857
hubify_dataset-0.1.0-py3-none-any.whl
17,431
b0/77/4c4a7ddc14aadfbcc69ac9f05976c362154784165fe7013b9bb4e993e019/hubify_dataset-0.1.0-py3-none-any.whl
py3
bdist_wheel
null
false
47f913c0d59a3cacde6695a4852c8875
98b13ded23b75cd7d231c6a4b1553d622032c416720fafbbaac31a62cbd13da2
b0774c4a7ddc14aadfbcc69ac9f05976c362154784165fe7013b9bb4e993e019
null
[ "LICENSE" ]
2.4
hubify-dataset
0.1.0
Convert object detection datasets (COCO, YOLO, Pascal VOC, etc.) to HuggingFace format
# Hubify ![Test & Lint](https://github.com/benjamintli/coco2hf/workflows/Test%20%26%20Lint/badge.svg) ![CLI Smoke Test](https://github.com/benjamintli/coco2hf/workflows/CLI%20Smoke%20Test/badge.svg) Convert object detection datasets to HuggingFace format and upload to the Hub. **Currently supported formats:** - COCO format annotations - YOLO format annotations - YOLO OBB format annotations **Coming soon:** Pascal VOC, Labelme, and more! ## Motivations for this tool HuggingFace has become the de facto *open source* community for uploading datasets and models. It's best known for LLMs, but there's nothing about HuggingFace's dataset hosting that's specific to language modeling. This tool consolidates the different formats from the object detection domain (COCO, Pascal VOC, etc.) into the layout HuggingFace suggests for its Image Datasets, and uploads the result to HuggingFace Hub. ## Installation ```bash # Install with uv (recommended) uv pip install -e . # Or with pip pip install -e . 
``` ## Usage After installation, you can use the `hubify` command: ```bash # Auto-detect annotations in train/validation/test directories hubify --data-dir /path/to/images --format coco # Manually specify annotation files hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --validation-annotations /path/to/instances_val2017.json # Generate sample visualizations hubify --data-dir /path/to/images --visualize # Push to HuggingFace Hub hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --push-to-hub username/my-dataset ``` Or for yolo: ``` hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo-obb --push-to-hub benjamintli/dota-v1.5 hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo --push-to-hub benjamintli/dota-v1.5 ``` Or run directly with Python (from the virtual environment): ```bash source .venv/bin/activate python -m src.main --data-dir /path/to/images ``` ## Expected Directory Structure * For coco: ``` data-dir/ ├── train/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) ├── validation/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) └── test/ (optional) ├── instances*.json └── *.jpg ``` ## Output The tool generates `metadata.jsonl` files in each split directory: ``` data-dir/ ├── train/ │ └── metadata.jsonl └── validation/ └── metadata.jsonl ``` Each line in `metadata.jsonl` contains: ```json { "file_name": "image.jpg", "objects": { "bbox": [[x, y, width, height], ...], "category": [0, 1, ...] 
} } ``` ## Options - `--data-dir`: Root directory containing train/validation/test subdirectories (required) - `--train-annotations`: Path to training annotations JSON (optional) - `--validation-annotations`: Path to validation annotations JSON (optional) - `--test-annotations`: Path to test annotations JSON (optional) - `--visualize`: Generate sample visualization images with bounding boxes - `--push-to-hub`: Push dataset to HuggingFace Hub (format: `username/dataset-name`) - `--token`: HuggingFace API token (optional, defaults to `HF_TOKEN` env var or `huggingface-cli login`) ### Authentication for Hub Push When using `--push-to-hub`, the tool looks for your HuggingFace token in this order: 1. `--token YOUR_TOKEN` (CLI argument) 2. `HF_TOKEN` environment variable 3. Token from `huggingface-cli login` If no token is found, you'll get a helpful error message with instructions.
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "datasets>=4.4.2", "huggingface-hub>=1.2.3", "pillow>=12.1.0", "pyyaml>=6.0", "rich>=13.9.4", "ruff>=0.14.10", "ruff>=0.14.10; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:26:25.030625
hubify_dataset-0.1.0.tar.gz
15,884
36/6b/791412cf9fd119abdfab5bf521c7aff06570339938633c68aae71f8b74c7/hubify_dataset-0.1.0.tar.gz
source
sdist
null
false
9968c6d0664e472e75abb3948dca45f6
bf9afb8c2d18af30260185aa09264d9a3e11ff0b548b9dbcdc1b2a1183400120
366b791412cf9fd119abdfab5bf521c7aff06570339938633c68aae71f8b74c7
null
[ "LICENSE" ]
2.4
plato-sdk-v2
2.0.52
Python SDK for the Plato API
# Plato Python SDK Python SDK for the Plato platform. Uses [Harbor](https://harborframework.com) for agent execution. ## Installation ```bash pip install plato-sdk-v2 # For agent functionality (requires Python 3.12+) pip install 'plato-sdk-v2[agents]' ``` Or with uv: ```bash uv add plato-sdk-v2 uv add 'plato-sdk-v2[agents]' # for agent support ``` ## Configuration Create a `.env` file in your project root: ```bash PLATO_API_KEY=your-api-key PLATO_BASE_URL=https://plato.so # optional, defaults to https://plato.so ``` Or set environment variables directly: ```bash export PLATO_API_KEY=your-api-key ``` ## Agents The SDK uses Harbor's agent framework. All agents are `BaseInstalledAgent` subclasses that run in containers. ### Available Agents **Harbor built-in agents** (code agents): | Agent | Description | |-------|-------------| | `claude-code` | Claude Code CLI | | `openhands` | OpenHands/All Hands AI | | `codex` | OpenAI Codex CLI | | `aider` | Aider pair programming | | `gemini-cli` | Google Gemini CLI | | `goose` | Block Goose | | `swe-agent` | SWE-agent | | `mini-swe-agent` | Mini SWE-agent | | `cline-cli` | Cline CLI | | `cursor-cli` | Cursor CLI | | `opencode` | OpenCode | | `qwen-coder` | Qwen Coder | **Plato custom agents** (browser/automation): | Agent | Description | |-------|-------------| | `computer-use` | Browser automation (install: `pip install plato-agent-computer-use`) | ### Python Usage ```python from plato.agents import ClaudeCode, OpenHands, AgentFactory, AgentName from pathlib import Path # Option 1: Use AgentFactory agent = AgentFactory.create_agent_from_name( AgentName.CLAUDE_CODE, logs_dir=Path("./logs"), model_name="anthropic/claude-sonnet-4", ) # Option 2: Import agent class directly agent = ClaudeCode( logs_dir=Path("./logs"), model_name="anthropic/claude-sonnet-4", ) # Option 3: Create custom BaseInstalledAgent from plato.agents import BaseInstalledAgent ``` ### CLI Usage ```bash # Run an agent plato agent run -a claude-code -m 
anthropic/claude-sonnet-4 -d swe-bench-lite # List available agents plato agent list # Get agent config schema plato agent schema claude-code # Publish custom agent to Plato PyPI plato agent publish ./my-agent ``` ### Agent Schemas Get configuration schemas for any agent: ```python from plato.agents import get_agent_schema, AGENT_SCHEMAS # Get schema for specific agent schema = get_agent_schema("claude-code") print(schema) # List all available schemas print(list(AGENT_SCHEMAS.keys())) ``` ### Custom Agents Create a custom agent by extending `BaseInstalledAgent`: ```python from harbor.agents.installed.base import BaseInstalledAgent, ExecInput from pathlib import Path class MyAgent(BaseInstalledAgent): @staticmethod def name() -> str: return "my-agent" @property def _install_agent_template_path(self) -> Path: return Path(__file__).parent / "install.sh.j2" def create_run_agent_commands(self, instruction: str) -> list[ExecInput]: return [ExecInput(command=f"my-agent --task '{instruction}'")] ``` Publish to Plato PyPI: ```bash plato agent publish ./my-agent-package ``` --- ## Sessions & Environments ### Flow 1: Create Session from Environments Use this when you want to spin up environments for development, testing, or custom automation. 
```python import asyncio from plato.v2 import AsyncPlato, Env async def main(): plato = AsyncPlato() # Create session with one or more environments # (heartbeat starts automatically to keep session alive) session = await plato.sessions.create( envs=[ Env.simulator("gitea", dataset="blank", alias="gitea"), Env.simulator("kanboard", alias="kanboard"), ], timeout=600, ) # Reset environments to initial state await session.reset() # Get public URLs for browser access public_urls = await session.get_public_url() for alias, url in public_urls.items(): print(f"{alias}: {url}") # Get state mutations from all environments state = await session.get_state() print(state) # Cleanup await session.close() await plato.close() asyncio.run(main()) ``` ### Flow 2: Create Session from Task Use this when running evaluations against predefined tasks. This flow includes task evaluation at the end. ```python import asyncio from plato.v2 import AsyncPlato async def main(): plato = AsyncPlato() # Create session from task ID session = await plato.sessions.create(task=123, timeout=600) # Reset environments to initial state await session.reset() # Get public URLs for browser access public_urls = await session.get_public_url() for alias, url in public_urls.items(): print(f"{alias}: {url}") # Evaluate task completion evaluation = await session.evaluate() print(f"Task completed: {evaluation}") # Cleanup await session.close() await plato.close() asyncio.run(main()) ``` ## Environment Configuration Two ways to specify environments: ```python from plato.v2 import Env # 1. From simulator (most common) Env.simulator("gitea") # default tag Env.simulator("gitea", tag="staging") # specific tag Env.simulator("gitea", dataset="blank") # specific dataset Env.simulator("gitea", alias="my-git") # custom alias # 2. 
From artifact ID Env.artifact("artifact-abc123") Env.artifact("artifact-abc123", alias="my-env") ``` ## Per-Environment Operations Access individual environments within a session: ```python # Get all environments for env in session.envs: print(f"{env.alias}: {env.job_id}") # Get specific environment by alias gitea = session.get_env("gitea") if gitea: # Execute shell command result = await gitea.execute("whoami", timeout=30) print(result) # Get state for this environment only state = await gitea.get_state() # Reset this environment only await gitea.reset() ``` ## Sync Client A synchronous client is also available: ```python from plato.v2 import Plato, Env plato = Plato() session = plato.sessions.create( envs=[Env.simulator("gitea", alias="gitea")], timeout=600, ) session.reset() public_urls = session.get_public_url() state = session.get_state() session.close() plato.close() ``` ## Architecture ``` plato/ ├── agents/ # Harbor agent re-exports + schemas ├── sims/ # Simulator clients (Spree, Firefly, etc.) ├── world/ # World/environment abstractions ├── v1/ # Legacy SDK + CLI └── v2/ # New API client ``` ## Documentation - [Generating Simulator SDKs](docs/GENERATING_SIM_SDKS.md) - How to create API clients for simulators - [Building Simulators](BUILDING_SIMS.md) - Internal docs for snapshotting simulators ## License MIT
text/markdown
null
Plato <support@plato.so>
null
null
null
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Typing :: Typed" ]
[]
null
null
<3.14,>=3.10
[]
[]
[]
[ "aiohttp>=3.8.0", "cryptography>=43.0.0", "datamodel-code-generator>=0.43.0", "email-validator>=2.0.0", "google-genai>=1.0.0", "httpx>=0.25.0", "jinja2>=3.1.0", "openapi-pydantic>=0.5.1", "pydantic-settings>=2.12.0", "pydantic>=2.0.0", "python-dotenv>=1.2.1", "pyyaml>=6.0", "requests>=2.32.5...
[]
[]
[]
[]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:26:37.655712
plato_sdk_v2-2.0.52.tar.gz
687,499
14/d7/987522fea9dcac38edf75d8af10e29ce18e316ca5f3629594c187afc04a4/plato_sdk_v2-2.0.52.tar.gz
source
sdist
null
false
b436921f86555c92255eb317a3f6c2d9
048ad07c6aa56a053acd346af921cc81376d7f3839a209d62f48899ad19604a0
14d7987522fea9dcac38edf75d8af10e29ce18e316ca5f3629594c187afc04a4
MIT
[]
2.4
plato-sdk-v2
2.0.52
Python SDK for the Plato API
# Plato Python SDK Python SDK for the Plato platform. Uses [Harbor](https://harborframework.com) for agent execution. ## Installation ```bash pip install plato-sdk-v2 # For agent functionality (requires Python 3.12+) pip install 'plato-sdk-v2[agents]' ``` Or with uv: ```bash uv add plato-sdk-v2 uv add 'plato-sdk-v2[agents]' # for agent support ``` ## Configuration Create a `.env` file in your project root: ```bash PLATO_API_KEY=your-api-key PLATO_BASE_URL=https://plato.so # optional, defaults to https://plato.so ``` Or set environment variables directly: ```bash export PLATO_API_KEY=your-api-key ``` ## Agents The SDK uses Harbor's agent framework. All agents are `BaseInstalledAgent` subclasses that run in containers. ### Available Agents **Harbor built-in agents** (code agents): | Agent | Description | |-------|-------------| | `claude-code` | Claude Code CLI | | `openhands` | OpenHands/All Hands AI | | `codex` | OpenAI Codex CLI | | `aider` | Aider pair programming | | `gemini-cli` | Google Gemini CLI | | `goose` | Block Goose | | `swe-agent` | SWE-agent | | `mini-swe-agent` | Mini SWE-agent | | `cline-cli` | Cline CLI | | `cursor-cli` | Cursor CLI | | `opencode` | OpenCode | | `qwen-coder` | Qwen Coder | **Plato custom agents** (browser/automation): | Agent | Description | |-------|-------------| | `computer-use` | Browser automation (install: `pip install plato-agent-computer-use`) | ### Python Usage ```python from plato.agents import ClaudeCode, OpenHands, AgentFactory, AgentName from pathlib import Path # Option 1: Use AgentFactory agent = AgentFactory.create_agent_from_name( AgentName.CLAUDE_CODE, logs_dir=Path("./logs"), model_name="anthropic/claude-sonnet-4", ) # Option 2: Import agent class directly agent = ClaudeCode( logs_dir=Path("./logs"), model_name="anthropic/claude-sonnet-4", ) # Option 3: Create custom BaseInstalledAgent from plato.agents import BaseInstalledAgent ``` ### CLI Usage ```bash # Run an agent plato agent run -a claude-code -m 
anthropic/claude-sonnet-4 -d swe-bench-lite # List available agents plato agent list # Get agent config schema plato agent schema claude-code # Publish custom agent to Plato PyPI plato agent publish ./my-agent ``` ### Agent Schemas Get configuration schemas for any agent: ```python from plato.agents import get_agent_schema, AGENT_SCHEMAS # Get schema for specific agent schema = get_agent_schema("claude-code") print(schema) # List all available schemas print(list(AGENT_SCHEMAS.keys())) ``` ### Custom Agents Create a custom agent by extending `BaseInstalledAgent`: ```python from harbor.agents.installed.base import BaseInstalledAgent, ExecInput from pathlib import Path class MyAgent(BaseInstalledAgent): @staticmethod def name() -> str: return "my-agent" @property def _install_agent_template_path(self) -> Path: return Path(__file__).parent / "install.sh.j2" def create_run_agent_commands(self, instruction: str) -> list[ExecInput]: return [ExecInput(command=f"my-agent --task '{instruction}'")] ``` Publish to Plato PyPI: ```bash plato agent publish ./my-agent-package ``` --- ## Sessions & Environments ### Flow 1: Create Session from Environments Use this when you want to spin up environments for development, testing, or custom automation. 
```python import asyncio from plato.v2 import AsyncPlato, Env async def main(): plato = AsyncPlato() # Create session with one or more environments # (heartbeat starts automatically to keep session alive) session = await plato.sessions.create( envs=[ Env.simulator("gitea", dataset="blank", alias="gitea"), Env.simulator("kanboard", alias="kanboard"), ], timeout=600, ) # Reset environments to initial state await session.reset() # Get public URLs for browser access public_urls = await session.get_public_url() for alias, url in public_urls.items(): print(f"{alias}: {url}") # Get state mutations from all environments state = await session.get_state() print(state) # Cleanup await session.close() await plato.close() asyncio.run(main()) ``` ### Flow 2: Create Session from Task Use this when running evaluations against predefined tasks. This flow includes task evaluation at the end. ```python import asyncio from plato.v2 import AsyncPlato async def main(): plato = AsyncPlato() # Create session from task ID session = await plato.sessions.create(task=123, timeout=600) # Reset environments to initial state await session.reset() # Get public URLs for browser access public_urls = await session.get_public_url() for alias, url in public_urls.items(): print(f"{alias}: {url}") # Evaluate task completion evaluation = await session.evaluate() print(f"Task completed: {evaluation}") # Cleanup await session.close() await plato.close() asyncio.run(main()) ``` ## Environment Configuration Two ways to specify environments: ```python from plato.v2 import Env # 1. From simulator (most common) Env.simulator("gitea") # default tag Env.simulator("gitea", tag="staging") # specific tag Env.simulator("gitea", dataset="blank") # specific dataset Env.simulator("gitea", alias="my-git") # custom alias # 2. 
From artifact ID Env.artifact("artifact-abc123") Env.artifact("artifact-abc123", alias="my-env") ``` ## Per-Environment Operations Access individual environments within a session: ```python # Get all environments for env in session.envs: print(f"{env.alias}: {env.job_id}") # Get specific environment by alias gitea = session.get_env("gitea") if gitea: # Execute shell command result = await gitea.execute("whoami", timeout=30) print(result) # Get state for this environment only state = await gitea.get_state() # Reset this environment only await gitea.reset() ``` ## Sync Client A synchronous client is also available: ```python from plato.v2 import Plato, Env plato = Plato() session = plato.sessions.create( envs=[Env.simulator("gitea", alias="gitea")], timeout=600, ) session.reset() public_urls = session.get_public_url() state = session.get_state() session.close() plato.close() ``` ## Architecture ``` plato/ ├── agents/ # Harbor agent re-exports + schemas ├── sims/ # Simulator clients (Spree, Firefly, etc.) ├── world/ # World/environment abstractions ├── v1/ # Legacy SDK + CLI └── v2/ # New API client ``` ## Documentation - [Generating Simulator SDKs](docs/GENERATING_SIM_SDKS.md) - How to create API clients for simulators - [Building Simulators](BUILDING_SIMS.md) - Internal docs for snapshotting simulators ## License MIT
text/markdown
null
Plato <support@plato.so>
null
null
null
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Typing :: Typed" ]
[]
null
null
<3.14,>=3.10
[]
[]
[]
[ "aiohttp>=3.8.0", "cryptography>=43.0.0", "datamodel-code-generator>=0.43.0", "email-validator>=2.0.0", "google-genai>=1.0.0", "httpx>=0.25.0", "jinja2>=3.1.0", "openapi-pydantic>=0.5.1", "pydantic-settings>=2.12.0", "pydantic>=2.0.0", "python-dotenv>=1.2.1", "pyyaml>=6.0", "requests>=2.32.5...
[]
[]
[]
[]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:26:39.625010
plato_sdk_v2-2.0.52-py3-none-any.whl
632,936
71/98/f46d6a54e3278620bd788212d4efc6489048afb33060e10c9a327714de5d/plato_sdk_v2-2.0.52-py3-none-any.whl
py3
bdist_wheel
null
false
2fc0bd71dea0e3b7e145ed131d7ce647
8be6b3589e3455e8df0d4b7afbc84b3141ea78bbc1a268eb15f5b8ef32a7dd32
7198f46d6a54e3278620bd788212d4efc6489048afb33060e10c9a327714de5d
MIT
[]
2.4
keras-nightly
3.14.0.dev2026011604
Multi-backend Keras
# Keras 3: Deep Learning for Humans Keras 3 is a multi-backend deep learning framework, with support for JAX, TensorFlow, PyTorch, and OpenVINO (inference only). Effortlessly build and train models for computer vision, natural language processing, audio processing, timeseries forecasting, recommender systems, etc. - **Accelerated model development**: Ship deep learning solutions faster thanks to the high-level UX of Keras and the availability of easy-to-debug runtimes like PyTorch or JAX eager execution. - **State-of-the-art performance**: By picking the backend that is the fastest for your model architecture (often JAX!), leverage speedups ranging from 20% to 350% compared to other frameworks. [Benchmark here](https://keras.io/getting_started/benchmarks/). - **Datacenter-scale training**: Scale confidently from your laptop to large clusters of GPUs or TPUs. Join nearly three million developers, from burgeoning startups to global enterprises, in harnessing the power of Keras 3. ## Installation ### Install with pip Keras 3 is available on PyPI as `keras`. Note that Keras 2 remains available as the `tf-keras` package. 1. Install `keras`: ``` pip install keras --upgrade ``` 2. Install backend package(s). To use `keras`, you should also install the backend of choice: `tensorflow`, `jax`, or `torch`. Additionally, the `openvino` backend is available, supporting model inference only. ### Local installation #### Minimal installation Keras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras. To install a local development version: 1. Install dependencies: ``` pip install -r requirements.txt ``` 2. Run the installation command from the root directory. ``` python pip_build.py --install ``` 3. 
Run the API generation script when creating PRs that update `keras_export` public APIs: ``` ./shell/api_gen.sh ``` ## Backend Compatibility Table The following table lists the minimum supported versions of each backend for the latest stable release of Keras (v3.x): | Backend | Minimum Supported Version | |------------|---------------------------| | TensorFlow | 2.16.1 | | JAX | 0.4.20 | | PyTorch | 2.1.0 | | OpenVINO | 2025.3.0 | #### Adding GPU support The `requirements.txt` file will install a CPU-only version of TensorFlow, JAX, and PyTorch. For GPU support, we also provide a separate `requirements-{backend}-cuda.txt` for TensorFlow, JAX, and PyTorch. These install all CUDA dependencies via `pip` and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each backend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with `conda`: ```shell conda create -y -n keras-jax python=3.10 conda activate keras-jax pip install -r requirements-jax-cuda.txt python pip_build.py --install ``` ## Configuring your backend You can export the environment variable `KERAS_BACKEND` or you can edit your local config file at `~/.keras/keras.json` to configure your backend. Available backend options are: `"tensorflow"`, `"jax"`, `"torch"`, `"openvino"`. Example: ``` export KERAS_BACKEND="jax" ``` In Colab, you can do: ```python import os os.environ["KERAS_BACKEND"] = "jax" import keras ``` **Note:** The backend must be configured before importing `keras`, and the backend cannot be changed after the package has been imported. **Note:** The OpenVINO backend is an inference-only backend, meaning it is designed only for running model predictions using the `model.predict()` method. ## Backwards compatibility Keras 3 is intended to work as a drop-in replacement for `tf.keras` (when using the TensorFlow backend).
Just take your existing `tf.keras` code, make sure that your calls to `model.save()` are using the up-to-date `.keras` format, and you're done. If your `tf.keras` model does not include custom components, you can start running it on top of JAX or PyTorch immediately. If it does include custom components (e.g. custom layers or a custom `train_step()`), it is usually possible to convert it to a backend-agnostic implementation in just a few minutes. In addition, Keras models can consume datasets in any format, regardless of the backend you're using: you can train your models with your existing `tf.data.Dataset` pipelines or PyTorch `DataLoaders`. ## Why use Keras 3? - Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework, e.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow. - Write custom components (e.g. layers, models, metrics) that you can use in low-level workflows in any framework. - You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch. - You can take a Keras model and use it as part of a PyTorch-native `Module` or as part of a JAX-native model function. - Make your ML code future-proof by avoiding framework lock-in. - As a PyTorch user: get access to the power and usability of Keras, at last! - As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library. Read more in the [Keras 3 release announcement](https://keras.io/keras_3/).
text/markdown
null
Keras team <keras-users@googlegroups.com>
null
null
Apache License 2.0
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3 :: Only", "Operating System :: Unix", "Operating System :: MacOS", "Intended Audience :: Science/Research", ...
[]
null
null
>=3.11
[]
[]
[]
[ "absl-py", "numpy", "rich", "namex", "h5py", "optree", "ml-dtypes", "packaging" ]
[]
[]
[]
[ "Home, https://keras.io/", "Repository, https://github.com/keras-team/keras" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:26:41.069999
keras_nightly-3.14.0.dev2026011604-py3-none-any.whl
1,528,839
47/d1/aa12bd5af539231caf2cbc16c61bcf03a4801c2df9c2b80578e40b312d9f/keras_nightly-3.14.0.dev2026011604-py3-none-any.whl
py3
bdist_wheel
null
false
b27747d5846933feba5ca0bc52f397e1
a208aa2d351e41786a433b9361ea4d7c202e174ed27d8fba67d64d34fb44c17b
47d1aa12bd5af539231caf2cbc16c61bcf03a4801c2df9c2b80578e40b312d9f
null
[]
2.4
keras-nightly
3.14.0.dev2026011604
Multi-backend Keras
# Keras 3: Deep Learning for Humans Keras 3 is a multi-backend deep learning framework, with support for JAX, TensorFlow, PyTorch, and OpenVINO (for inference-only). Effortlessly build and train models for computer vision, natural language processing, audio processing, timeseries forecasting, recommender systems, etc. - **Accelerated model development**: Ship deep learning solutions faster thanks to the high-level UX of Keras and the availability of easy-to-debug runtimes like PyTorch or JAX eager execution. - **State-of-the-art performance**: By picking the backend that is the fastest for your model architecture (often JAX!), leverage speedups ranging from 20% to 350% compared to other frameworks. [Benchmark here](https://keras.io/getting_started/benchmarks/). - **Datacenter-scale training**: Scale confidently from your laptop to large clusters of GPUs or TPUs. Join nearly three million developers, from burgeoning startups to global enterprises, in harnessing the power of Keras 3. ## Installation ### Install with pip Keras 3 is available on PyPI as `keras`. Note that Keras 2 remains available as the `tf-keras` package. 1. Install `keras`: ``` pip install keras --upgrade ``` 2. Install backend package(s). To use `keras`, you should also install the backend of choice: `tensorflow`, `jax`, or `torch`. Additionally, the `openvino` backend is available with support for model inference only. ### Local installation #### Minimal installation Keras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras. To install a local development version: 1. Install dependencies: ``` pip install -r requirements.txt ``` 2. Run the installation command from the root directory: ``` python pip_build.py --install ``` 3.
Run the API generation script when creating PRs that update `keras_export` public APIs: ``` ./shell/api_gen.sh ``` ## Backend Compatibility Table The following table lists the minimum supported versions of each backend for the latest stable release of Keras (v3.x): | Backend | Minimum Supported Version | |------------|---------------------------| | TensorFlow | 2.16.1 | | JAX | 0.4.20 | | PyTorch | 2.1.0 | | OpenVINO | 2025.3.0 | #### Adding GPU support The `requirements.txt` file will install a CPU-only version of TensorFlow, JAX, and PyTorch. For GPU support, we also provide a separate `requirements-{backend}-cuda.txt` for TensorFlow, JAX, and PyTorch. These install all CUDA dependencies via `pip` and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each backend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with `conda`: ```shell conda create -y -n keras-jax python=3.10 conda activate keras-jax pip install -r requirements-jax-cuda.txt python pip_build.py --install ``` ## Configuring your backend You can export the environment variable `KERAS_BACKEND` or you can edit your local config file at `~/.keras/keras.json` to configure your backend. Available backend options are: `"tensorflow"`, `"jax"`, `"torch"`, `"openvino"`. Example: ``` export KERAS_BACKEND="jax" ``` In Colab, you can do: ```python import os os.environ["KERAS_BACKEND"] = "jax" import keras ``` **Note:** The backend must be configured before importing `keras`, and the backend cannot be changed after the package has been imported. **Note:** The OpenVINO backend is an inference-only backend, meaning it is designed only for running model predictions using the `model.predict()` method. ## Backwards compatibility Keras 3 is intended to work as a drop-in replacement for `tf.keras` (when using the TensorFlow backend).
Just take your existing `tf.keras` code, make sure that your calls to `model.save()` are using the up-to-date `.keras` format, and you're done. If your `tf.keras` model does not include custom components, you can start running it on top of JAX or PyTorch immediately. If it does include custom components (e.g. custom layers or a custom `train_step()`), it is usually possible to convert it to a backend-agnostic implementation in just a few minutes. In addition, Keras models can consume datasets in any format, regardless of the backend you're using: you can train your models with your existing `tf.data.Dataset` pipelines or PyTorch `DataLoaders`. ## Why use Keras 3? - Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework, e.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow. - Write custom components (e.g. layers, models, metrics) that you can use in low-level workflows in any framework. - You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch. - You can take a Keras model and use it as part of a PyTorch-native `Module` or as part of a JAX-native model function. - Make your ML code future-proof by avoiding framework lock-in. - As a PyTorch user: get access to the power and usability of Keras, at last! - As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library. Read more in the [Keras 3 release announcement](https://keras.io/keras_3/).
text/markdown
null
Keras team <keras-users@googlegroups.com>
null
null
Apache License 2.0
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3 :: Only", "Operating System :: Unix", "Operating System :: MacOS", "Intended Audience :: Science/Research", ...
[]
null
null
>=3.11
[]
[]
[]
[ "absl-py", "numpy", "rich", "namex", "h5py", "optree", "ml-dtypes", "packaging" ]
[]
[]
[]
[ "Home, https://keras.io/", "Repository, https://github.com/keras-team/keras" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:26:43.111910
keras_nightly-3.14.0.dev2026011604.tar.gz
1,166,754
0f/b7/cfcf9c3e85456d83b502ca5f0a083d15e13e317bb8ad2cd6c5649f51e05b/keras_nightly-3.14.0.dev2026011604.tar.gz
source
sdist
null
false
c76a172f76399646d5683502c8fe0fec
13a12f052f099663d2a30246e2d8b68d8c95afe74eb51af53316d1ed86c8b719
0fb7cfcf9c3e85456d83b502ca5f0a083d15e13e317bb8ad2cd6c5649f51e05b
null
[]
2.4
evan-tools
0.2.2
It's a set of tools for Evan's development.
null
null
null
Evan <evanstonlaw555@gmail.com>, Evan <kaluoshilong@qq.com>
null
null
null
null
[]
[]
null
null
>=3.13
[]
[]
[]
[ "humanize>=4.14.0", "pydash>=8.0.5", "pyyaml>=6.0.3", "typer>=0.20.0" ]
[]
[]
[]
[]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:26:51.861124
evan_tools-0.2.2.tar.gz
15,732
eb/67/e40bbafce6aa9e45e5eb8352d506bb9e5461f20aae23936103772fcfb875/evan_tools-0.2.2.tar.gz
source
sdist
null
false
37c5b86202a56c3ec19a06d0e31f6678
9396832e30f212e0d795c3400c2bff5c6116f2f58c94d686483f52fe0915bd59
eb67e40bbafce6aa9e45e5eb8352d506bb9e5461f20aae23936103772fcfb875
null
[]
2.4
evan-tools
0.2.2
It's a set of tools for Evan's development.
null
null
null
Evan <evanstonlaw555@gmail.com>, Evan <kaluoshilong@qq.com>
null
null
null
null
[]
[]
null
null
>=3.13
[]
[]
[]
[ "humanize>=4.14.0", "pydash>=8.0.5", "pyyaml>=6.0.3", "typer>=0.20.0" ]
[]
[]
[]
[]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:26:52.850691
evan_tools-0.2.2-py3-none-any.whl
13,581
4d/cc/1c4ccc1a9b3d3fff5a3b12676d4314cd71350e12224fce7a3b7dc0ccad2b/evan_tools-0.2.2-py3-none-any.whl
py3
bdist_wheel
null
false
458dd5bc7a4d07ce7e5df3847a30d0bd
7d6e02f9f3a3b7b4f335a44503b3e5aeeae5d70f0b02bd253e22dc629e62e202
4dcc1c4ccc1a9b3d3fff5a3b12676d4314cd71350e12224fce7a3b7dc0ccad2b
null
[]
2.4
ucdmcmc
1.4
MCMC fitting code for low temperature atmosphere spectra
# UCDMCMC Markov Chain Monte Carlo (MCMC) fitting code for low-temperature stars, brown dwarfs, and extrasolar planet spectra, tuned particularly to the near-infrared. ## INSTALLATION NOTES `ucdmcmc` can be installed with pip: pip install ucdmcmc or from git: git clone https://github.com/aburgasser/ucdmcmc.git cd ucdmcmc python setup.py install It is recommended that you install in a conda environment to ensure the dependencies do not conflict with your own installation: conda create -n ucdmcmc python=3.13 conda activate ucdmcmc pip install ucdmcmc A check that this worked is that you can import `ucdmcmc` into a Python/Jupyter notebook, and that `ucdmcmc.MODEL_FOLDER` points to the models folder that was downloaded. `ucdmcmc` uses the following external packages: * `astropy`: https://www.astropy.org/ * `astroquery`: https://astroquery.readthedocs.io/en/latest/ * `corner`: https://corner.readthedocs.io/en/latest/ * `emcee`: https://emcee.readthedocs.io/en/stable/ * `matplotlib`: https://matplotlib.org/ * `numpy<2.0`: https://numpy.org/ * `pandas`: https://pandas.pydata.org/ * `(py)tables`: https://www.pytables.org/ * `requests`: https://requests.readthedocs.io/en/latest/ * `scipy`: https://scipy.org/ * `spectres`: https://spectres.readthedocs.io/en/latest/ * `statsmodels`: https://www.statsmodels.org/stable/index.html * `tqdm`: https://tqdm.github.io/ ### Optionally install SPLAT To generate new model sets using the built-in `generateModels()` function, you will need to install `SPLAT` (note: this is not necessary for the other functionality in this code). `SPLAT` is not automatically installed on setup. The instructions are essentially the same: git clone https://github.com/aburgasser/splat.git cd splat python -m pip install . See https://github.com/aburgasser/splat for additional instructions. ## Models `ucdmcmc` comes with a starter set of models that play nicely with the code. An extended set can be downloaded from https://spexarchive.coolstarlab.ucsd.edu/ucdmcmc/.
These should be placed in the folder `.ucdmcmc_models` in your home directory (e.g., `/home/adam/.ucdmcmc_models`). If it doesn't already exist, this directory will be created on the first call to `ucdmcmc`. In addition, models that exist on this website but are not present in this folder will be downloaded directly when `getModelSet()` is called. You can also generate your own set of models using the `generateModels()` function (see note above). ## Spectra `ucdmcmc` comes with a starter set of spectra for the following instruments: * EUCLID: TBD * NIR: TRAPPIST1 spectrum from Davoudi et al. (2024) https://ui.adsabs.harvard.edu/abs/2024ApJ...970L...4D/abstract * SPEX-PRISM: 2MASS J0559-1404 from Burgasser et al. (2006) https://ui.adsabs.harvard.edu/abs/2006ApJ...637.1067B/abstract * JWST-NIRSPEC-PRISM: UNCOVER 33436 from Burgasser et al. (2024) https://ui.adsabs.harvard.edu/abs/2024ApJ...962..177B/abstract * JWST-NIRSPEC-G395H: TBD * JWST-MIRI-LRS: TBD * JWST-NIRSPEC-MIRI: Combined NIRSpec/PRISM and MIRI/LRS of SDSS J1624+0029 from Beiler et al. (2024) https://ui.adsabs.harvard.edu/abs/2024arXiv240708518B/abstract User spectra can be read in using `ucdmcmc.Spectrum("filename")`. Files can be `.fits`, `.csv`, `.txt` (space-delimited), or `.tsv` (tab-delimited), and should have wavelength, flux, and uncertainty arrays. You can also read in these files separately and create a Spectrum object using the call `ucdmcmc.Spectrum(wave=[wave array], flux=[flux array], noise=[uncertainty array])`. See the docstring for `ucdmcmc.Spectrum` for further details. ## Usage [TBD examples] ## Opacities [TBD] ## Citing the code If you use this code in your research, publications, or presentations, please include the following citation: Adam Burgasser. (2025). aburgasser/ucdmcmc (vXXX). Zenodo.
https://doi.org/10.5281/zenodo.16923762 or in bibtex: @software{adam_burgasser_2025_16921711, author = {Adam Burgasser}, doi = {10.5281/zenodo.16921711}, month = aug, publisher = {Zenodo}, title = {aburgasser/ucdmcmc}, url = {https://doi.org/10.5281/zenodo.16921711}, version = {vXXX}, year = 2025, bdsk-url-1 = {https://doi.org/10.5281/zenodo.16921711}} where (vXXX) corresponds to the version used. `ucdmcmc` and its antecedents have been used in the following publications: * Burgasser et al. (2024, ApJ 962, 177): https://ui.adsabs.harvard.edu/abs/2024ApJ...962..177B/abstract * Burgasser et al. (2025, ApJ 982, 79): https://ui.adsabs.harvard.edu/abs/2025ApJ...982...79B/abstract * Lueber & Burgasser (2025, ApJ 988, 31): https://ui.adsabs.harvard.edu/abs/2025ApJ...988...31L/abstract * Burgasser et al. (2025, Science, 390, 697): https://ui.adsabs.harvard.edu/abs/2025Sci...390..697B/abstract * Morrissey et al. (2026, AJ, in press): https://ui.adsabs.harvard.edu/abs/2025arXiv251101167M/abstract Please let me know if you make use of the code so we can include your publication in the list above!
text/markdown
null
Adam Burgasser <aburgasser@ucsd.edu>
null
Adam Burgasser <aburgasser@ucsd.edu>
null
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.10
[]
[]
[]
[ "astropy", "astroquery", "corner", "emcee", "matplotlib", "numpy<2.0", "pandas", "tables", "requests", "scipy", "spectres", "splat", "statsmodels", "tqdm", "importlib_resources; python_version < \"3.7\"", "pytest; extra == \"test\"", "ruff; extra == \"test\"" ]
[]
[]
[]
[ "Repository, https://github.com/aburgasser/ucdmcmc.git", "Issues, https://github.com/aburgasser/ucdmcmc/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:27:37.914155
ucdmcmc-1.4-py3-none-any.whl
45,292,154
f3/88/bf0dfd23263aa2ca71f4ab99487b71572b5d9f758389c69a55c29a82f709/ucdmcmc-1.4-py3-none-any.whl
py3
bdist_wheel
null
false
b87abcfb1110b87a6a33b87b0f7b7841
bcdd5211f5347273a17fcd730244a95c608d840ca415edf4fc072055e5765651
f388bf0dfd23263aa2ca71f4ab99487b71572b5d9f758389c69a55c29a82f709
null
[ "LICENSE" ]
2.4
ucdmcmc
1.4
MCMC fitting code for low temperature atmosphere spectra
# UCDMCMC Markov Chain Monte Carlo (MCMC) fitting code for low-temperature stars, brown dwarfs, and extrasolar planet spectra, tuned particularly to the near-infrared. ## INSTALLATION NOTES `ucdmcmc` can be installed with pip: pip install ucdmcmc or from git: git clone https://github.com/aburgasser/ucdmcmc.git cd ucdmcmc python setup.py install It is recommended that you install in a conda environment to ensure the dependencies do not conflict with your own installation: conda create -n ucdmcmc python=3.13 conda activate ucdmcmc pip install ucdmcmc A check that this worked is that you can import `ucdmcmc` into a Python/Jupyter notebook, and that `ucdmcmc.MODEL_FOLDER` points to the models folder that was downloaded. `ucdmcmc` uses the following external packages: * `astropy`: https://www.astropy.org/ * `astroquery`: https://astroquery.readthedocs.io/en/latest/ * `corner`: https://corner.readthedocs.io/en/latest/ * `emcee`: https://emcee.readthedocs.io/en/stable/ * `matplotlib`: https://matplotlib.org/ * `numpy<2.0`: https://numpy.org/ * `pandas`: https://pandas.pydata.org/ * `(py)tables`: https://www.pytables.org/ * `requests`: https://requests.readthedocs.io/en/latest/ * `scipy`: https://scipy.org/ * `spectres`: https://spectres.readthedocs.io/en/latest/ * `statsmodels`: https://www.statsmodels.org/stable/index.html * `tqdm`: https://tqdm.github.io/ ### Optionally install SPLAT To generate new model sets using the built-in `generateModels()` function, you will need to install `SPLAT` (note: this is not necessary for the other functionality in this code). `SPLAT` is not automatically installed on setup. The instructions are essentially the same: git clone https://github.com/aburgasser/splat.git cd splat python -m pip install . See https://github.com/aburgasser/splat for additional instructions. ## Models `ucdmcmc` comes with a starter set of models that play nicely with the code. An extended set can be downloaded from https://spexarchive.coolstarlab.ucsd.edu/ucdmcmc/.
These should be placed in the folder `.ucdmcmc_models` in your home directory (e.g., `/home/adam/.ucdmcmc_models`). If it doesn't already exist, this directory will be created on the first call to `ucdmcmc`. In addition, models that exist on this website but are not present in this folder will be downloaded directly when `getModelSet()` is called. You can also generate your own set of models using the `generateModels()` function (see note above). ## Spectra `ucdmcmc` comes with a starter set of spectra for the following instruments: * EUCLID: TBD * NIR: TRAPPIST1 spectrum from Davoudi et al. (2024) https://ui.adsabs.harvard.edu/abs/2024ApJ...970L...4D/abstract * SPEX-PRISM: 2MASS J0559-1404 from Burgasser et al. (2006) https://ui.adsabs.harvard.edu/abs/2006ApJ...637.1067B/abstract * JWST-NIRSPEC-PRISM: UNCOVER 33436 from Burgasser et al. (2024) https://ui.adsabs.harvard.edu/abs/2024ApJ...962..177B/abstract * JWST-NIRSPEC-G395H: TBD * JWST-MIRI-LRS: TBD * JWST-NIRSPEC-MIRI: Combined NIRSpec/PRISM and MIRI/LRS of SDSS J1624+0029 from Beiler et al. (2024) https://ui.adsabs.harvard.edu/abs/2024arXiv240708518B/abstract User spectra can be read in using `ucdmcmc.Spectrum("filename")`. Files can be `.fits`, `.csv`, `.txt` (space-delimited), or `.tsv` (tab-delimited), and should have wavelength, flux, and uncertainty arrays. You can also read in these files separately and create a Spectrum object using the call `ucdmcmc.Spectrum(wave=[wave array], flux=[flux array], noise=[uncertainty array])`. See the docstring for `ucdmcmc.Spectrum` for further details. ## Usage [TBD examples] ## Opacities [TBD] ## Citing the code If you use this code in your research, publications, or presentations, please include the following citation: Adam Burgasser. (2025). aburgasser/ucdmcmc (vXXX). Zenodo.
https://doi.org/10.5281/zenodo.16923762 or in bibtex: @software{adam_burgasser_2025_16921711, author = {Adam Burgasser}, doi = {10.5281/zenodo.16921711}, month = aug, publisher = {Zenodo}, title = {aburgasser/ucdmcmc}, url = {https://doi.org/10.5281/zenodo.16921711}, version = {vXXX}, year = 2025, bdsk-url-1 = {https://doi.org/10.5281/zenodo.16921711}} where (vXXX) corresponds to the version used. `ucdmcmc` and its antecedents have been used in the following publications: * Burgasser et al. (2024, ApJ 962, 177): https://ui.adsabs.harvard.edu/abs/2024ApJ...962..177B/abstract * Burgasser et al. (2025, ApJ 982, 79): https://ui.adsabs.harvard.edu/abs/2025ApJ...982...79B/abstract * Lueber & Burgasser (2025, ApJ 988, 31): https://ui.adsabs.harvard.edu/abs/2025ApJ...988...31L/abstract * Burgasser et al. (2025, Science, 390, 697): https://ui.adsabs.harvard.edu/abs/2025Sci...390..697B/abstract * Morrissey et al. (2026, AJ, in press): https://ui.adsabs.harvard.edu/abs/2025arXiv251101167M/abstract Please let me know if you make use of the code so we can include your publication in the list above!
text/markdown
null
Adam Burgasser <aburgasser@ucsd.edu>
null
Adam Burgasser <aburgasser@ucsd.edu>
null
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.10
[]
[]
[]
[ "astropy", "astroquery", "corner", "emcee", "matplotlib", "numpy<2.0", "pandas", "tables", "requests", "scipy", "spectres", "splat", "statsmodels", "tqdm", "importlib_resources; python_version < \"3.7\"", "pytest; extra == \"test\"", "ruff; extra == \"test\"" ]
[]
[]
[]
[ "Repository, https://github.com/aburgasser/ucdmcmc.git", "Issues, https://github.com/aburgasser/ucdmcmc/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:27:41.716024
ucdmcmc-1.4.tar.gz
46,024,861
bd/51/06a5a741bde0c29cb960fc2e8ab447d2aee8d5c8a9669b5d0ed06b2a0471/ucdmcmc-1.4.tar.gz
source
sdist
null
false
717c4c703347b5e3b521a580a8599947
9970c276dc32e523b181193edd8804dd68ca032b027c68d5462b40ff01eb57dd
bd5106a5a741bde0c29cb960fc2e8ab447d2aee8d5c8a9669b5d0ed06b2a0471
null
[ "LICENSE" ]
2.4
scrapy-impersonate
1.6.2
Scrapy download handler that can impersonate browser fingerprints
# scrapy-impersonate [![version](https://img.shields.io/pypi/v/scrapy-impersonate.svg)](https://pypi.python.org/pypi/scrapy-impersonate) `scrapy-impersonate` is a Scrapy download handler. This project integrates [curl_cffi](https://github.com/yifeikong/curl_cffi) to perform HTTP requests, so it can impersonate browsers' TLS signatures or JA3 fingerprints. ## Installation ``` pip install scrapy-impersonate ``` ## Activation To use this package, replace the default `http` and `https` Download Handlers by updating the [`DOWNLOAD_HANDLERS`](https://docs.scrapy.org/en/latest/topics/settings.html#download-handlers) setting: ```python DOWNLOAD_HANDLERS = { "http": "scrapy_impersonate.ImpersonateDownloadHandler", "https": "scrapy_impersonate.ImpersonateDownloadHandler", } ``` By setting `USER_AGENT = None`, `curl_cffi` will automatically choose the appropriate User-Agent based on the impersonated browser: ```python USER_AGENT = None ``` Also, be sure to [install the asyncio-based Twisted reactor](https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor) for proper asynchronous execution: ```python TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor" ``` ## Usage Set the `impersonate` [Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) key to download a request using `curl_cffi`: ```python import scrapy class ImpersonateSpider(scrapy.Spider): name = "impersonate_spider" custom_settings = { "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor", "USER_AGENT": "", "DOWNLOAD_HANDLERS": { "http": "scrapy_impersonate.ImpersonateDownloadHandler", "https": "scrapy_impersonate.ImpersonateDownloadHandler", }, "DOWNLOADER_MIDDLEWARES": { "scrapy_impersonate.RandomBrowserMiddleware": 1000, }, } def start_requests(self): for _ in range(5): yield scrapy.Request( "https://tls.browserleaks.com/json", dont_filter=True, ) def parse(self, response): # ja3_hash: 
98cc085d47985d3cca9ec1415bbbf0d1 (chrome133a) # ja3_hash: 2d692a4485ca2f5f2b10ecb2d2909ad3 (firefox133) # ja3_hash: c11ab92a9db8107e2a0b0486f35b80b9 (chrome124) # ja3_hash: 773906b0efdefa24a7f2b8eb6985bf37 (safari15_5) # ja3_hash: cd08e31494f9531f560d64c695473da9 (chrome99_android) yield {"ja3_hash": response.json()["ja3_hash"]} ``` ### impersonate-args You can pass any necessary [arguments](https://github.com/lexiforest/curl_cffi/blob/38a91f2e7b23d9c9bda1d8085b7e41e33767c768/curl_cffi/requests/session.py#L1189-L1222) to `curl_cffi` through `impersonate_args`. For example: ```python yield scrapy.Request( "https://tls.browserleaks.com/json", dont_filter=True, meta={ "impersonate": browser, "impersonate_args": { "verify": False, "timeout": 10, }, }, ) ``` ## Supported browsers The following browsers can be impersonated | Browser | Version | Build | OS | Name | | --- | --- | --- | --- | --- | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 99 | 99.0.4844.51 | Windows 10 | `chrome99` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 99 | 99.0.4844.73 | Android 12 | `chrome99_android` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 100 | 100.0.4896.75 | Windows 10 | `chrome100` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 101 | 101.0.4951.67 | Windows 10 | `chrome101` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 104 | 104.0.5112.81 | Windows 10 | `chrome104` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 107 | 107.0.5304.107 | Windows 10 | `chrome107` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 110 | 110.0.5481.177 | 
Windows 10 | `chrome110` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 116 | 116.0.5845.180 | Windows 10 | `chrome116` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 119 | 119.0.6045.199 | macOS Sonoma | `chrome119` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 120 | 120.0.6099.109 | macOS Sonoma | `chrome120` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 123 | 123.0.6312.124 | macOS Sonoma | `chrome123` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 124 | 124.0.6367.60 | macOS Sonoma | `chrome124` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 131 | 131.0.6778.86 | macOS Sonoma | `chrome131` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 131 | 131.0.6778.81 | Android 14 | `chrome131_android` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 133 | 133.0.6943.55 | macOS Sequoia | `chrome133a` | | ![Edge](https://raw.githubusercontent.com/alrra/browser-logos/main/src/edge/edge_24x24.png "Edge") | 99 | 99.0.1150.30 | Windows 10 | `edge99` | | ![Edge](https://raw.githubusercontent.com/alrra/browser-logos/main/src/edge/edge_24x24.png "Edge") | 101 | 101.0.1210.47 | Windows 10 | `edge101` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 15.3 | 16612.4.9.1.8 | MacOS Big Sur | `safari15_3` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 15.5 | 17613.2.7.1.8 | MacOS Monterey | `safari15_5` | | 
![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 17.0 | unclear | MacOS Sonoma | `safari17_0` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 17.2 | unclear | iOS 17.2 | `safari17_2_ios` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 18.0 | unclear | MacOS Sequoia | `safari18_0` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 18.0 | unclear | iOS 18.0 | `safari18_0_ios` | | ![Firefox](https://github.com/alrra/browser-logos/blob/main/src/firefox/firefox_24x24.png "Firefox") | 133.0 | 133.0.3 | macOS Sonoma | `firefox133` | | ![Firefox](https://github.com/alrra/browser-logos/blob/main/src/firefox/firefox_24x24.png "Firefox") | 135.0 | 135.0.1 | macOS Sonoma | `firefox135` | ## Thanks This project is inspired by the following projects: + [curl_cffi](https://github.com/yifeikong/curl_cffi) - Python binding for curl-impersonate via cffi. A http client that can impersonate browser tls/ja3/http2 fingerprints. + [curl-impersonate](https://github.com/lwthiker/curl-impersonate) - A special build of curl that can impersonate Chrome & Firefox + [scrapy-playwright](https://github.com/scrapy-plugins/scrapy-playwright) - Playwright integration for Scrapy
text/markdown
Jalil SA (jxlil)
null
null
null
MIT
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "License :: OSI Approved :: MIT License" ]
[]
https://github.com/jxlil/scrapy-impersonate
null
>=3.8
[]
[]
[]
[ "curl-cffi>=0.13.0", "scrapy>=2.12.0" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.14.2
2026-01-16T04:27:50.652516
scrapy_impersonate-1.6.2-py3-none-any.whl
7,333
38/6c/9c70917980d6b10e0560849420b7669a33cfbf85d7900f561824b0b1afc2/scrapy_impersonate-1.6.2-py3-none-any.whl
py3
bdist_wheel
null
false
5971d6c0664a7acf724fc153186b11de
97caa429863c217fe3fa0dcd4ca94745cc12082d7b88676c4f9d27b66981635b
386c9c70917980d6b10e0560849420b7669a33cfbf85d7900f561824b0b1afc2
null
[ "LICENSE" ]
2.4
scrapy-impersonate
1.6.2
Scrapy download handler that can impersonate browser fingerprints
# scrapy-impersonate [![version](https://img.shields.io/pypi/v/scrapy-impersonate.svg)](https://pypi.python.org/pypi/scrapy-impersonate) `scrapy-impersonate` is a Scrapy download handler. This project integrates [curl_cffi](https://github.com/yifeikong/curl_cffi) to perform HTTP requests, so it can impersonate browsers' TLS signatures or JA3 fingerprints. ## Installation ``` pip install scrapy-impersonate ``` ## Activation To use this package, replace the default `http` and `https` Download Handlers by updating the [`DOWNLOAD_HANDLERS`](https://docs.scrapy.org/en/latest/topics/settings.html#download-handlers) setting: ```python DOWNLOAD_HANDLERS = { "http": "scrapy_impersonate.ImpersonateDownloadHandler", "https": "scrapy_impersonate.ImpersonateDownloadHandler", } ``` If you set `USER_AGENT = None`, `curl_cffi` will automatically choose the appropriate User-Agent based on the impersonated browser: ```python USER_AGENT = None ``` Also, be sure to [install the asyncio-based Twisted reactor](https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor) for proper asynchronous execution: ```python TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor" ``` ## Usage Set the `impersonate` [Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) key to download a request using `curl_cffi`: ```python import scrapy class ImpersonateSpider(scrapy.Spider): name = "impersonate_spider" custom_settings = { "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor", "USER_AGENT": None, "DOWNLOAD_HANDLERS": { "http": "scrapy_impersonate.ImpersonateDownloadHandler", "https": "scrapy_impersonate.ImpersonateDownloadHandler", }, "DOWNLOADER_MIDDLEWARES": { "scrapy_impersonate.RandomBrowserMiddleware": 1000, }, } def start_requests(self): for _ in range(5): yield scrapy.Request( "https://tls.browserleaks.com/json", dont_filter=True, ) def parse(self, response): # ja3_hash: 
98cc085d47985d3cca9ec1415bbbf0d1 (chrome133a) # ja3_hash: 2d692a4485ca2f5f2b10ecb2d2909ad3 (firefox133) # ja3_hash: c11ab92a9db8107e2a0b0486f35b80b9 (chrome124) # ja3_hash: 773906b0efdefa24a7f2b8eb6985bf37 (safari15_5) # ja3_hash: cd08e31494f9531f560d64c695473da9 (chrome99_android) yield {"ja3_hash": response.json()["ja3_hash"]} ``` ### impersonate-args You can pass any necessary [arguments](https://github.com/lexiforest/curl_cffi/blob/38a91f2e7b23d9c9bda1d8085b7e41e33767c768/curl_cffi/requests/session.py#L1189-L1222) to `curl_cffi` through `impersonate_args`. For example: ```python yield scrapy.Request( "https://tls.browserleaks.com/json", dont_filter=True, meta={ "impersonate": browser, "impersonate_args": { "verify": False, "timeout": 10, }, }, ) ``` ## Supported browsers The following browsers can be impersonated | Browser | Version | Build | OS | Name | | --- | --- | --- | --- | --- | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 99 | 99.0.4844.51 | Windows 10 | `chrome99` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 99 | 99.0.4844.73 | Android 12 | `chrome99_android` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 100 | 100.0.4896.75 | Windows 10 | `chrome100` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 101 | 101.0.4951.67 | Windows 10 | `chrome101` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 104 | 104.0.5112.81 | Windows 10 | `chrome104` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 107 | 107.0.5304.107 | Windows 10 | `chrome107` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 110 | 110.0.5481.177 | 
Windows 10 | `chrome110` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 116 | 116.0.5845.180 | Windows 10 | `chrome116` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 119 | 119.0.6045.199 | macOS Sonoma | `chrome119` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 120 | 120.0.6099.109 | macOS Sonoma | `chrome120` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 123 | 123.0.6312.124 | macOS Sonoma | `chrome123` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 124 | 124.0.6367.60 | macOS Sonoma | `chrome124` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 131 | 131.0.6778.86 | macOS Sonoma | `chrome131` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 131 | 131.0.6778.81 | Android 14 | `chrome131_android` | | ![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/main/src/chrome/chrome_24x24.png "Chrome") | 133 | 133.0.6943.55 | macOS Sequoia | `chrome133a` | | ![Edge](https://raw.githubusercontent.com/alrra/browser-logos/main/src/edge/edge_24x24.png "Edge") | 99 | 99.0.1150.30 | Windows 10 | `edge99` | | ![Edge](https://raw.githubusercontent.com/alrra/browser-logos/main/src/edge/edge_24x24.png "Edge") | 101 | 101.0.1210.47 | Windows 10 | `edge101` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 15.3 | 16612.4.9.1.8 | MacOS Big Sur | `safari15_3` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 15.5 | 17613.2.7.1.8 | MacOS Monterey | `safari15_5` | | 
![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 17.0 | unclear | MacOS Sonoma | `safari17_0` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 17.2 | unclear | iOS 17.2 | `safari17_2_ios` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 18.0 | unclear | MacOS Sequoia | `safari18_0` | | ![Safari](https://github.com/alrra/browser-logos/blob/main/src/safari/safari_24x24.png "Safari") | 18.0 | unclear | iOS 18.0 | `safari18_0_ios` | | ![Firefox](https://github.com/alrra/browser-logos/blob/main/src/firefox/firefox_24x24.png "Firefox") | 133.0 | 133.0.3 | macOS Sonoma | `firefox133` | | ![Firefox](https://github.com/alrra/browser-logos/blob/main/src/firefox/firefox_24x24.png "Firefox") | 135.0 | 135.0.1 | macOS Sonoma | `firefox135` | ## Thanks This project is inspired by the following projects: + [curl_cffi](https://github.com/yifeikong/curl_cffi) - Python binding for curl-impersonate via cffi. A http client that can impersonate browser tls/ja3/http2 fingerprints. + [curl-impersonate](https://github.com/lwthiker/curl-impersonate) - A special build of curl that can impersonate Chrome & Firefox + [scrapy-playwright](https://github.com/scrapy-plugins/scrapy-playwright) - Playwright integration for Scrapy
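For per-request control, the `impersonate` meta key accepts any value from the `Name` column of the table above. A small helper can build the meta dict; note that the `impersonate_meta` function and the `SUPPORTED` list here are illustrative sketches (a handful of names copied from the table), not part of the package:

```python
import random

# A few browser names taken from the supported-browsers table; not exhaustive.
SUPPORTED = ["chrome131", "chrome133a", "firefox135", "safari18_0", "edge101"]

def impersonate_meta(browser=None):
    """Build a Request.meta dict for scrapy-impersonate.

    Picks a random supported browser when none is given, mimicking
    what RandomBrowserMiddleware does for you automatically.
    """
    return {"impersonate": browser or random.choice(SUPPORTED)}

# Usage inside a spider (hypothetical):
#   yield scrapy.Request(url, meta=impersonate_meta("chrome131"))
```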
text/markdown
Jalil SA (jxlil)
null
null
null
MIT
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "License :: OSI Approved :: MIT License" ]
[]
https://github.com/jxlil/scrapy-impersonate
null
>=3.8
[]
[]
[]
[ "curl-cffi>=0.13.0", "scrapy>=2.12.0" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.14.2
2026-01-16T04:27:51.461913
scrapy_impersonate-1.6.2.tar.gz
6,707
c4/e3/632135937df9aafc23572bb8dde909654a06969e2f704b8a24afc3570063/scrapy_impersonate-1.6.2.tar.gz
source
sdist
null
false
c22f22a95a249d45d1ec88c8614a8c6c
5e057ae037e09d90b858c31da32597fe35c2324d198e57b86ead5a226fe2bba6
c4e3632135937df9aafc23572bb8dde909654a06969e2f704b8a24afc3570063
null
[ "LICENSE" ]
2.4
boto3-refresh-session
6.2.3
A simple Python package for refreshing the temporary security credentials in a boto3.session.Session object automatically.
<div align="center"> <img src="https://raw.githubusercontent.com/michaelthomasletts/boto3-refresh-session/refs/heads/main/doc/brs.png" /> </div> </br> <div align="center"><em> A simple Python package for refreshing the temporary security credentials in a <code>boto3.session.Session</code> object automatically. </em></div> </br> <div align="center"> <a href="https://pypi.org/project/boto3-refresh-session/"> <img src="https://img.shields.io/pypi/v/boto3-refresh-session?color=%23FF0000FF&logo=python&label=Latest%20Version" alt="pypi_version" /> </a> <a href="https://pypi.org/project/boto3-refresh-session/"> <img src="https://img.shields.io/pypi/pyversions/boto3-refresh-session?style=pypi&color=%23FF0000FF&logo=python&label=Compatible%20Python%20Versions" alt="py_version" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/actions/workflows/push.yml"> <img src="https://img.shields.io/github/actions/workflow/status/michaelthomasletts/boto3-refresh-session/push.yml?logo=github&color=%23FF0000FF&label=Build" alt="workflow" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/commits/main"> <img src="https://img.shields.io/github/last-commit/michaelthomasletts/boto3-refresh-session?logo=github&color=%23FF0000FF&label=Last%20Commit" alt="last_commit" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/stargazers"> <img src="https://img.shields.io/github/stars/michaelthomasletts/boto3-refresh-session?style=flat&logo=github&labelColor=555&color=FF0000&label=Stars" alt="stars" /> </a> <a href="https://pepy.tech/projects/boto3-refresh-session"> <img src="https://img.shields.io/endpoint?url=https%3A%2F%2Fmichaelthomasletts.github.io%2Fpepy-stats%2Fboto3-refresh-session.json&style=flat&logo=python&labelColor=555&color=FF0000" alt="downloads" /> </a> <a href="https://michaelthomasletts.github.io/boto3-refresh-session/index.html"> <img 
src="https://img.shields.io/badge/Official%20Documentation-📘-FF0000?style=flat&labelColor=555&logo=readthedocs" alt="documentation" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session"> <img src="https://img.shields.io/badge/Source%20Code-💻-FF0000?style=flat&labelColor=555&logo=github" alt="github" /> </a> <a href="https://michaelthomasletts.github.io/boto3-refresh-session/qanda.html"> <img src="https://img.shields.io/badge/Q%26A-❔-FF0000?style=flat&labelColor=555&logo=vercel&label=Q%26A" alt="qanda" /> </a> <a href="https://michaelthomasletts.github.io/blog/brs-rationale/"> <img src="https://img.shields.io/badge/Blog%20Post-📘-FF0000?style=flat&labelColor=555&logo=readthedocs" alt="blog" /> </a> <a href="https://github.com/sponsors/michaelthomasletts"> <img src="https://img.shields.io/badge/Sponsor%20this%20Project-💙-FF0000?style=flat&labelColor=555&logo=githubsponsors" alt="sponsorship" /> </a> </div> ## 😛 Features - Drop-in replacement for `boto3.session.Session` - MFA support included for STS - SSO support via AWS profiles - Optionally caches boto3 clients - Supports automatic temporary credential refresh for: - **STS** - **IoT Core** - X.509 certificates w/ role aliases over mTLS (PEM files and PKCS#11) - MQTT actions are available! - [Tested](https://github.com/michaelthomasletts/boto3-refresh-session/tree/main/tests), [documented](https://michaelthomasletts.github.io/boto3-refresh-session/index.html), and [published to PyPI](https://pypi.org/project/boto3-refresh-session/) ## 😌 Recognition and Testimonials [Featured in TL;DR Sec.](https://tldrsec.com/p/tldr-sec-282) [Featured in CloudSecList.](https://cloudseclist.com/issues/issue-290) Recognized during AWS Community Day Midwest on June 5th, 2025 (the founder's birthday!). A testimonial from a Cyber Security Engineer at a FAANG company: > _Most of my work is on tooling related to AWS security, so I'm pretty choosy about boto3 credentials-adjacent code. 
I often opt to just write this sort of thing myself so I at least know that I can reason about it. But I found boto3-refresh-session to be very clean and intuitive [...] We're using the RefreshableSession class as part of a client cache construct [...] We're using AWS Lambda to perform lots of operations across several regions in hundreds of accounts, over and over again, all day every day. And it turns out that there's a surprising amount of overhead to creating boto3 clients (mostly deserializing service definition json), so we can run MUCH more efficiently if we keep a cache of clients, all equipped with automatically refreshing sessions._ ## 💻 Installation ```bash pip install boto3-refresh-session ``` ## 📝 Usage <details> <summary><strong>Core Concepts (click to expand)</strong></summary> ### Core Concepts 1. `RefreshableSession` is the intended interface for using `boto3-refresh-session`. Whether you're using this package to refresh temporary credentials returned by STS, the IoT credential provider (which is really just STS, but I digress), or some custom authentication or credential provider, `RefreshableSession` is where you *ought to* be working when using `boto3-refresh-session`. 2. *You can use all of the same keyword parameters normally associated with `boto3.session.Session`!* For instance, suppose you want to pass `region_name` to `RefreshableSession` as a parameter, whereby it's passed to `boto3.session.Session`. That's perfectly fine! Just pass it like you normally would when initializing `boto3.session.Session`. These keyword parameters are *completely optional*, though. If you're confused, the main idea to remember is this: if initializing `boto3.session.Session` *requires* a particular keyword parameter then pass it to `RefreshableSession`; if not, don't worry about it. 3. To tell `RefreshableSession` which AWS service you're working with for authentication and credential retrieval purposes (STS vs. IoT vs. 
some custom credential provider), you'll need to pass a `method` parameter to `RefreshableSession`. Since the `service_name` namespace is already occupied by `boto3.session.Session`, [`boto3-refresh-session` uses `method` instead of "service" so as to avoid confusion](https://github.com/michaelthomasletts/boto3-refresh-session/blob/04acb2adb34e505c4dc95711f6b2f97748a2a489/boto3_refresh_session/utils/typing.py#L40). If you're using `RefreshableSession` for STS, however, then `method` is set to `"sts"` by default. You don't need to pass the `method` keyword argument in that case. 4. Using `RefreshableSession` for STS, IoT, or custom flows requires different keyword parameters that are unique to those particular methods. For instance, `STSRefreshableSession`, which is the engine for STS in `boto3-refresh-session`, requires `assume_role_kwargs` and optionally allows `sts_client_kwargs` whereas `CustomRefreshableSession` and `IoTX509RefreshableSession` do not. To familiarize yourself with the keyword parameters for each method, check the documentation for each of those engines [in the Refresh Strategies section here](https://michaelthomasletts.com/boto3-refresh-session/modules/index.html). 5. Irrespective of whatever `method` you pass as a keyword parameter, `RefreshableSession` accepts a keyword parameter named `defer_refresh`. Basically, this boolean tells `boto3-refresh-session` either to refresh credentials *the moment they expire* or to *wait until credentials are explicitly needed*. If you are working in a low-latency environment then `defer_refresh = False` might be helpful. For most users, however, `defer_refresh = True` is most desirable. For that reason, `defer_refresh = True` is the default value. Most users, therefore, should not concern themselves too much with this feature. 6. Some developers struggle to imagine where `boto3-refresh-session` might be helpful. 
To figure out if `boto3-refresh-session` is for your use case, or whether `credential_process` satisfies your needs, check out [this blog post](https://michaelthomasletts.com/blog/brs-rationale/). `boto3-refresh-session` is not for every developer or use case; it is a niche tool. 7. `boto3-refresh-session` supports client caching in order to minimize the massive memory footprint associated with duplicative clients. By default, `RefreshableSession` caches clients. To deactivate this feature, set `cache_clients=False`. 8. `boto3-refresh-session` supports MFA. Refer to the MFA section further below for more details. 9. `boto3-refresh-session` supports SSO; however, it _does not_ and _will never_ automatically handle `sso login` for you -- that is, not unless you write your own hacky custom credential getter and pass that to `RefreshableSession(method="custom", ...)`, which I do not recommend (but cannot prevent you from doing). </details> <details> <summary><strong>Clients (click to expand)</strong></summary> ### Clients Most developers who use `boto3` interact primarily with `boto3.client` instead of `boto3.session.Session`. But many developers may not realize that `boto3.session.Session` underlies `boto3.client`! In fact, that's precisely what makes `boto3-refresh-session` possible! Before we get to initializing clients via `RefreshableSession`, however, let's briefly talk about `boto3` clients and memory . . . Clients consume a shocking amount of memory. So much so that many developers create their own bespoke client cache. To minimize the memory footprint associated with duplicative clients, as well as make the lives of developers a little easier, `boto3-refresh-session` includes a `cache_clients` parameter which, by default, caches clients according to the parameters passed to the `client` method! With client caching out of the way, in order to use the `boto3.client` interface, but with the benefits of `boto3-refresh-session`, you have a few options! 
In the following examples, let's assume you want to use STS for retrieving temporary credentials for the sake of simplicity. Let's also focus specifically on `client`. Switching to `resource` follows exactly the same idioms as below, except that `client` must be switched to `resource` in the pseudo-code, obviously. If you are not sure how to use `RefreshableSession` for STS (or custom auth flows) then check the usage instructions in the following sections! ##### `RefreshableSession.client` (Recommended) So long as you reuse the same `session` object when creating `client` objects, this approach can be used everywhere in your code. ```python from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... } session = RefreshableSession(assume_role_kwargs=assume_role_kwargs) s3 = session.client("s3") ``` ##### `DEFAULT_SESSION` This technique can be helpful if you want to use the same instance of `RefreshableSession` everywhere in your code without reference to `boto3_refresh_session`! Note that you must assign to the module attribute `boto3.DEFAULT_SESSION`; rebinding a name imported with `from boto3 import DEFAULT_SESSION` would have no effect on `boto3.client`. ```python import boto3 from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... } boto3.DEFAULT_SESSION = RefreshableSession(assume_role_kwargs=assume_role_kwargs) s3 = boto3.client("s3") ``` ##### `botocore_session` ```python from boto3 import client from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... 
} s3 = client( service_name="s3", botocore_session=RefreshableSession(assume_role_kwargs=assume_role_kwargs) ) ``` </details> <details> <summary><strong>STS (click to expand)</strong></summary> ### STS Most developers use AWS STS to assume an IAM role and return a set of temporary security credentials. boto3-refresh-session can be used to ensure those temporary credentials refresh automatically. For additional information on the exact parameters that `RefreshableSession` takes for STS, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.sts.STSRefreshableSession.html). ```python import boto3_refresh_session as brs assume_role_kwargs = { "RoleArn": "<your IAM role arn>", # required "RoleSessionName": "<your role session name>", # required ... } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, # required sts_client_kwargs={...}, # optional ... # misc. params for boto3.session.Session ) ``` </details> <details> <summary><strong>MFA (click to expand)</strong></summary> ### MFA Support When assuming a role that requires MFA, `boto3-refresh-session` supports automatic token provisioning through the `mfa_token_provider` parameter. This parameter accepts a callable that returns a fresh MFA token code (string) whenever credentials need to be refreshed. The `mfa_token_provider` approach is **strongly recommended** over manually providing `TokenCode` in `assume_role_kwargs`, as MFA tokens expire after 30 seconds while AWS temporary credentials can last for hours. By using a callable, your application can automatically fetch fresh tokens on each refresh without manual intervention. There is nothing preventing you from manually providing `TokenCode` *without* `mfa_token_provider`; however, *you* will be responsible for updating `TokenCode` *before* automatic temporary credential refresh occurs, which is likely to be a fragile and complicated approach. 
When using `mfa_token_provider`, you must also provide `SerialNumber` (your MFA device ARN) in `assume_role_kwargs`. For additional information on the exact parameters that `RefreshableSession` takes for MFA, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.sts.STSRefreshableSession.html). ⚠️ Most developers will probably find example number four most helpful. #### Examples ```python import boto3_refresh_session as brs # Example 1: Interactive prompt for MFA token def get_mfa_token(): return input("Enter MFA token: ") # we'll reuse this object in each example for simplicity :) assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "SerialNumber": "arn:aws:iam::123456789012:mfa/your-user", # required with mfa_token_provider } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_mfa_token, # callable that returns MFA token ) # Example 2: Using pyotp for TOTP-based MFA import pyotp def get_totp_token(): totp = pyotp.TOTP("<your-secret-key>") return totp.now() session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_totp_token, ) # Example 3: Retrieving token from environment variable or external service import os def get_env_token(): return os.environ.get("AWS_MFA_TOKEN", "") session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_env_token, ) # Example 4: Using Yubikey (or any token provider CLI) from typing import Sequence import subprocess def mfa_token_provider(cmd: Sequence[str], timeout: float): p = subprocess.run( list(cmd), check=False, capture_output=True, text=True, timeout=timeout, ) return (p.stdout or "").strip() mfa_token_provider_args = { "cmd": ["ykman", "oath", "code", "--single", "AWS-prod"], # example token source "timeout": 3.0, } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, 
mfa_token_provider=mfa_token_provider, mfa_token_provider_args=mfa_token_provider_args, ) ``` </details> <details> <summary><strong>SSO (click to expand)</strong></summary> ### SSO `boto3-refresh-session` supports SSO by virtue of AWS profiles. The below pseudo-code illustrates how to assume an IAM role using an AWS profile with SSO. Not shown, however, is running `sso login` manually, which `boto3-refresh-session` does not perform automatically for you. Therefore, you must manually run `sso login` as necessary. If you wish to automate `sso login` (not recommended) then you will need to write your own custom callable function and pass it to `RefreshableSession(method="custom", ...)`. In that event, please refer to the `Custom` documentation found in a separate section below. ```python from boto3_refresh_session import RefreshableSession session = RefreshableSession( assume_role_kwargs={ "RoleArn": "<your IAM role arn>", "RoleSessionName": "<your role session name>", }, profile_name="<your AWS profile name>", ... ) s3 = session.client("s3") ``` </details> <details> <summary><strong>Custom (click to expand)</strong></summary> ### Custom If you have a highly sophisticated, novel, or idiosyncratic authentication flow not included in boto3-refresh-session then you will need to provide your own custom temporary credentials callable object. `RefreshableSession` accepts custom credentials callable objects, as shown below. For additional information on the exact parameters that `RefreshableSession` takes for custom authentication flows, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.custom.CustomRefreshableSession.html#boto3_refresh_session.methods.custom.CustomRefreshableSession). ```python # create (or import) your custom credential method def your_custom_credential_getter(...): ... 
return { "access_key": ..., "secret_key": ..., "token": ..., "expiry_time": ..., } # and pass it to RefreshableSession session = RefreshableSession( method="custom", # required custom_credentials_method=your_custom_credential_getter, # required custom_credentials_method_args=..., # optional region_name=region_name, # optional profile_name=profile_name, # optional ... # misc. boto3.session.Session params ) ``` </details> <details> <summary><strong>IoT Core X.509 (click to expand)</strong></summary> ### IoT Core X.509 AWS IoT Core can vend temporary AWS credentials through the **credentials provider** when you connect with an X.509 certificate and a **role alias**. `boto3-refresh-session` makes this flow seamless by automatically refreshing credentials over **mTLS**. For additional information on the exact parameters that `IOTX509RefreshableSession` takes, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.iot.x509.IOTX509RefreshableSession.html). 
### PEM file ```python import boto3_refresh_session as brs # PEM certificate + private key example session = brs.RefreshableSession( method="iot", endpoint="<your-credentials-endpoint>.credentials.iot.<region>.amazonaws.com", role_alias="<your-role-alias>", certificate="/path/to/certificate.pem", private_key="/path/to/private-key.pem", thing_name="<your-thing-name>", # optional, if used in policies duration_seconds=3600, # optional, capped by role alias region_name="us-east-1", ) # Now you can use the session like any boto3 session s3 = session.client("s3") print(s3.list_buckets()) ``` ### PKCS#11 ```python session = brs.RefreshableSession( method="iot", endpoint="<your-credentials-endpoint>.credentials.iot.<region>.amazonaws.com", role_alias="<your-role-alias>", certificate="/path/to/certificate.pem", pkcs11={ "pkcs11_lib": "/usr/local/lib/softhsm/libsofthsm2.so", "user_pin": "1234", "slot_id": 0, "token_label": "MyToken", "private_key_label": "MyKey", }, thing_name="<your-thing-name>", region_name="us-east-1", ) ``` ### MQTT After initializing a session object, you can begin performing MQTT actions using the [mqtt method](https://github.com/michaelthomasletts/boto3-refresh-session/blob/deb68222925bf648f26e878ed4bc24b45317c7db/boto3_refresh_session/methods/iot/x509.py#L367)! You can reuse the same certificate, private key, et al. as those used to initialize `RefreshableSession`. Or, alternatively, you can provide separate PKCS#11 or certificate information, whether those be file paths or bytes values. Either way, at a minimum, you will need to provide the endpoint and client identifier (i.e. thing name). 
```python from awscrt.mqtt import QoS conn = session.mqtt( endpoint="<your endpoint>-ats.iot.<region>.amazonaws.com", client_id="<your thing name or client ID>", ) conn.connect().result() conn.publish(topic="foo/bar", payload=b"hi", qos=QoS.AT_LEAST_ONCE) conn.disconnect().result() ``` </details> ## ⚠️ Changes Browse through the various changes to `boto3-refresh-session` over time. #### 😥 v3.0.0 **The changes introduced by v3.0.0 will not impact ~99% of users** who generally interact with `boto3-refresh-session` only through `RefreshableSession`, *which is the intended usage for this package after all.* Advanced users, however, particularly those using low-level objects such as `BaseRefreshableSession | refreshable_session | BRSSession | utils.py`, may experience breaking changes. Please review [this PR](https://github.com/michaelthomasletts/boto3-refresh-session/pull/75) for additional details. #### ✂️ v4.0.0 The `ecs` module has been dropped. For additional details and rationale, please review [this PR](https://github.com/michaelthomasletts/boto3-refresh-session/pull/78). #### 😛 v5.0.0 Support for IoT Core via X.509 certificate-based authentication (over HTTPS) is now available! #### ➕ v5.1.0 MQTT support added for IoT Core via X.509 certificate-based authentication. #### ➕ v6.0.0 MFA support for STS added! #### 🔒😥 v6.2.0 - Client caching introduced to `RefreshableSession` in order to minimize memory footprint! Available via `cache_clients` parameter. - Testing suite expanded to include IOT, MFA, caching, and much more! - A subtle bug was uncovered where `RefreshableSession` created refreshable credentials but boto3's underlying session continued to resolve credentials via the default provider chain (i.e. env vars, shared config, etc.) unless explicitly wired. `get_credentials()` and clients could, in certain setups, use base session credentials instead of the refreshable STS/IoT/custom credentials via assumed role. 
To fix this, I updated the implementation in `BRSSession.__post_init__` to set `self._session._credentials = self._credentials`, ensuring all boto3 clients created from `RefreshableSession` use the refreshable credentials source of truth provided to `RefreshableCredentials | DeferredRefreshableCredentials`. After this change, refreshable credentials are used consistently everywhere, irrespective of setup. #### ✂️ v6.2.3 - The `RefreshableTemporaryCredentials` type hint was deprecated in favor of `TemporaryCredentials`. - `expiry_time` was added as a parameter returned by the `refreshable_credentials` method and `credentials` attribute.
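As a companion to the `Custom` section above: the callable passed as `custom_credentials_method` must return a mapping with the four keys shown there. A stdlib-only sketch follows; the static values are placeholders, and a real getter would call your own credential service:

```python
from datetime import datetime, timedelta, timezone

def example_credential_getter():
    # Placeholder values throughout; a real getter would call your auth backend.
    expiry = datetime.now(timezone.utc) + timedelta(hours=1)
    return {
        "access_key": "EXAMPLEACCESSKEY",
        "secret_key": "example-secret-key",
        "token": "example-session-token",
        # expiry_time as an ISO 8601 timestamp, matching the README's credential shape
        "expiry_time": expiry.isoformat(),
    }

creds = example_credential_getter()
```

The getter itself takes no arguments here; if yours needs configuration, pass it via `custom_credentials_method_args` as described above.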
text/markdown
Mike Letts
lettsmt@gmail.com
Michael Letts
lettsmt@gmail.com
MIT
boto3, botocore, aws, sts, credentials, token, refresh, iot, x509, mqtt
[ "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
https://github.com/michaelthomasletts/boto3-refresh-session
null
>=3.10
[]
[]
[]
[ "boto3", "botocore", "requests", "typing-extensions", "awscrt", "awsiotsdk" ]
[]
[]
[]
[ "Repository, https://github.com/michaelthomasletts/boto3-refresh-session", "Documentation, https://michaelthomasletts.github.io/boto3-refresh-session/index.html" ]
poetry/2.2.1 CPython/3.10.19 Linux/6.11.0-1018-azure
2026-01-16T04:28:29.880928
boto3_refresh_session-6.2.3-py3-none-any.whl
30,355
17/94/31a82fff6e9cb051e5999e9be27a0c338f587da941e98b808a5595656c0d/boto3_refresh_session-6.2.3-py3-none-any.whl
py3
bdist_wheel
null
false
f2644232fbbb8483f4d794743e3d1b7a
a2b61542f3682993695e799de0e1a9e7b0efbb925fc0424d95fbc549886a765d
179431a82fff6e9cb051e5999e9be27a0c338f587da941e98b808a5595656c0d
null
[]
2.4
boto3-refresh-session
6.2.3
A simple Python package for refreshing the temporary security credentials in a boto3.session.Session object automatically.
<div align="center"> <img src="https://raw.githubusercontent.com/michaelthomasletts/boto3-refresh-session/refs/heads/main/doc/brs.png" /> </div> </br> <div align="center"><em> A simple Python package for refreshing the temporary security credentials in a <code>boto3.session.Session</code> object automatically. </em></div> </br> <div align="center"> <a href="https://pypi.org/project/boto3-refresh-session/"> <img src="https://img.shields.io/pypi/v/boto3-refresh-session?color=%23FF0000FF&logo=python&label=Latest%20Version" alt="pypi_version" /> </a> <a href="https://pypi.org/project/boto3-refresh-session/"> <img src="https://img.shields.io/pypi/pyversions/boto3-refresh-session?style=pypi&color=%23FF0000FF&logo=python&label=Compatible%20Python%20Versions" alt="py_version" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/actions/workflows/push.yml"> <img src="https://img.shields.io/github/actions/workflow/status/michaelthomasletts/boto3-refresh-session/push.yml?logo=github&color=%23FF0000FF&label=Build" alt="workflow" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/commits/main"> <img src="https://img.shields.io/github/last-commit/michaelthomasletts/boto3-refresh-session?logo=github&color=%23FF0000FF&label=Last%20Commit" alt="last_commit" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session/stargazers"> <img src="https://img.shields.io/github/stars/michaelthomasletts/boto3-refresh-session?style=flat&logo=github&labelColor=555&color=FF0000&label=Stars" alt="stars" /> </a> <a href="https://pepy.tech/projects/boto3-refresh-session"> <img src="https://img.shields.io/endpoint?url=https%3A%2F%2Fmichaelthomasletts.github.io%2Fpepy-stats%2Fboto3-refresh-session.json&style=flat&logo=python&labelColor=555&color=FF0000" alt="downloads" /> </a> <a href="https://michaelthomasletts.github.io/boto3-refresh-session/index.html"> <img 
src="https://img.shields.io/badge/Official%20Documentation-📘-FF0000?style=flat&labelColor=555&logo=readthedocs" alt="documentation" /> </a> <a href="https://github.com/michaelthomasletts/boto3-refresh-session"> <img src="https://img.shields.io/badge/Source%20Code-💻-FF0000?style=flat&labelColor=555&logo=github" alt="github" /> </a> <a href="https://michaelthomasletts.github.io/boto3-refresh-session/qanda.html"> <img src="https://img.shields.io/badge/Q%26A-❔-FF0000?style=flat&labelColor=555&logo=vercel&label=Q%26A" alt="qanda" /> </a> <a href="https://michaelthomasletts.github.io/blog/brs-rationale/"> <img src="https://img.shields.io/badge/Blog%20Post-📘-FF0000?style=flat&labelColor=555&logo=readthedocs" alt="blog" /> </a> <a href="https://github.com/sponsors/michaelthomasletts"> <img src="https://img.shields.io/badge/Sponsor%20this%20Project-💙-FF0000?style=flat&labelColor=555&logo=githubsponsors" alt="sponsorship" /> </a> </div> ## 😛 Features - Drop-in replacement for `boto3.session.Session` - MFA support included for STS - SSO support via AWS profiles - Optionally caches boto3 clients - Supports automatic temporary credential refresh for: - **STS** - **IoT Core** - X.509 certificates w/ role aliases over mTLS (PEM files and PKCS#11) - MQTT actions are available! - [Tested](https://github.com/michaelthomasletts/boto3-refresh-session/tree/main/tests), [documented](https://michaelthomasletts.github.io/boto3-refresh-session/index.html), and [published to PyPI](https://pypi.org/project/boto3-refresh-session/) ## 😌 Recognition and Testimonials [Featured in TL;DR Sec.](https://tldrsec.com/p/tldr-sec-282) [Featured in CloudSecList.](https://cloudseclist.com/issues/issue-290) Recognized during AWS Community Day Midwest on June 5th, 2025 (the founder's birthday!). A testimonial from a Cyber Security Engineer at a FAANG company: > _Most of my work is on tooling related to AWS security, so I'm pretty choosy about boto3 credentials-adjacent code. 
I often opt to just write this sort of thing myself so I at least know that I can reason about it. But I found boto3-refresh-session to be very clean and intuitive [...] We're using the RefreshableSession class as part of a client cache construct [...] We're using AWS Lambda to perform lots of operations across several regions in hundreds of accounts, over and over again, all day every day. And it turns out that there's a surprising amount of overhead to creating boto3 clients (mostly deserializing service definition json), so we can run MUCH more efficiently if we keep a cache of clients, all equipped with automatically refreshing sessions._ ## 💻 Installation ```bash pip install boto3-refresh-session ``` ## 📝 Usage <details> <summary><strong>Core Concepts (click to expand)</strong></summary> ### Core Concepts 1. `RefreshableSession` is the intended interface for using `boto3-refresh-session`. Whether you're using this package to refresh temporary credentials returned by STS, the IoT credential provider (which is really just STS, but I digress), or some custom authentication or credential provider, `RefreshableSession` is where you *ought to* be working when using `boto3-refresh-session`. 2. *You can use all of the same keyword parameters normally associated with `boto3.session.Session`!* For instance, suppose you want to pass `region_name` to `RefreshableSession` as a parameter, whereby it's passed to `boto3.session.Session`. That's perfectly fine! Just pass it like you normally would when initializing `boto3.session.Session`. These keyword parameters are *completely optional*, though. If you're confused, the main idea to remember is this: if initializing `boto3.session.Session` *requires* a particular keyword parameter then pass it to `RefreshableSession`; if not, don't worry about it. 3. To tell `RefreshableSession` which AWS service you're working with for authentication and credential retrieval purposes (STS vs. IoT vs. 
some custom credential provider), you'll need to pass a `method` parameter to `RefreshableSession`. Since the `service_name` namespace is already occupied by `boto3.session.Session`, [`boto3-refresh-session` uses `method` instead of "service" so as to avoid confusion](https://github.com/michaelthomasletts/boto3-refresh-session/blob/04acb2adb34e505c4dc95711f6b2f97748a2a489/boto3_refresh_session/utils/typing.py#L40). If you're using `RefreshableSession` for STS, however, then `method` is set to `"sts"` by default. You don't need to pass the `method` keyword argument in that case. 4. Using `RefreshableSession` for STS, IoT, or custom flows requires different keyword parameters that are unique to those particular methods. For instance, `STSRefreshableSession`, which is the engine for STS in `boto3-refresh-session`, requires `assume_role_kwargs` and optionally allows `sts_client_kwargs`, whereas `CustomRefreshableSession` and `IOTX509RefreshableSession` do not. To familiarize yourself with the keyword parameters for each method, check the documentation for each of those engines [in the Refresh Strategies section here](https://michaelthomasletts.com/boto3-refresh-session/modules/index.html). 5. Irrespective of which `method` you pass, `RefreshableSession` accepts a keyword parameter named `defer_refresh`. This boolean tells `boto3-refresh-session` either to refresh credentials *the moment they expire* or to *wait until credentials are explicitly needed*. If you are working in a low-latency environment then `defer_refresh = False` might be helpful. For most users, however, `defer_refresh = True` is most desirable, which is why it is the default value. Most users, therefore, should not concern themselves too much with this feature. 6. Some developers struggle to imagine where `boto3-refresh-session` might be helpful. 
To figure out if `boto3-refresh-session` is for your use case, or whether `credential_process` satisfies your needs, check out [this blog post](https://michaelthomasletts.com/blog/brs-rationale/). `boto3-refresh-session` is not for every developer or use case; it is a niche tool. 7. `boto3-refresh-session` supports client caching in order to minimize the massive memory footprint associated with duplicative clients. By default, `RefreshableSession` caches clients. To deactivate this feature, set `cache_clients=False`. 8. `boto3-refresh-session` supports MFA. Refer to the MFA section further below for more details. 9. `boto3-refresh-session` supports SSO; however, it _does not_ and _will never_ automatically handle `aws sso login` for you -- that is, not unless you write your own hacky custom credential getter and pass that to `RefreshableSession(method="custom", ...)`, which I do not recommend (but cannot prevent you from doing). </details> <details> <summary><strong>Clients (click to expand)</strong></summary> ### Clients Most developers who use `boto3` interact primarily with `boto3.client` instead of `boto3.session.Session`. But many developers may not realize that `boto3.session.Session` underlies `boto3.client`! In fact, that's precisely what makes `boto3-refresh-session` possible! Before we get to initializing clients via `RefreshableSession`, however, let's briefly talk about `boto3` clients and memory... Clients consume a shocking amount of memory. So much so that many developers create their own bespoke client cache. To minimize the memory footprint associated with duplicative clients, and to make the lives of developers a little easier, `boto3-refresh-session` includes a `cache_clients` parameter which, by default, caches clients according to the parameters passed to the `client` method! With client caching out of the way, in order to use the `boto3.client` interface, but with the benefits of `boto3-refresh-session`, you have a few options! 
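The parameter-keyed caching described above can be illustrated with a minimal, standalone sketch. This mirrors the idea behind `cache_clients=True`; the class and the stand-in client object below are illustrative, not the library's internals:

```python
from typing import Any, Dict, Tuple


class CachingSessionSketch:
    """Illustrative stand-in for parameter-keyed client caching."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[Any, ...], object] = {}

    def client(self, service_name: str, **kwargs: Any) -> object:
        # Key the cache on the service name plus the sorted kwargs, so
        # identical calls return the identical object.
        key = (service_name, tuple(sorted(kwargs.items())))
        if key not in self._cache:
            self._cache[key] = object()  # stand-in for a real boto3 client
        return self._cache[key]


session = CachingSessionSketch()
assert session.client("s3") is session.client("s3")
assert session.client("s3") is not session.client("s3", region_name="us-west-2")
```

Because the cache key includes the call's parameters, asking for the same client twice reuses one object, while any difference in parameters yields a distinct client.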
In the following examples, let's assume you want to use STS for retrieving temporary credentials for the sake of simplicity. Let's also focus specifically on `client`. Switching to `resource` follows exactly the same idioms as below, except that `client` must be switched to `resource` in the pseudo-code. If you are not sure how to use `RefreshableSession` for STS (or custom auth flows) then check the usage instructions in the following sections! ##### `RefreshableSession.client` (Recommended) So long as you reuse the same `session` object when creating `client` objects, this approach can be used everywhere in your code. ```python from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... } session = RefreshableSession(assume_role_kwargs=assume_role_kwargs) s3 = session.client("s3") ``` ##### `DEFAULT_SESSION` This technique can be helpful if you want to use the same instance of `RefreshableSession` everywhere in your code without reference to `boto3_refresh_session`! Note that you must assign to the `boto3.DEFAULT_SESSION` module attribute; rebinding a name imported via `from boto3 import DEFAULT_SESSION` has no effect on `boto3` itself. ```python import boto3 from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... } boto3.DEFAULT_SESSION = RefreshableSession(assume_role_kwargs=assume_role_kwargs) s3 = boto3.client("s3") ``` ##### `botocore_session` ```python from boto3 import client from boto3_refresh_session import RefreshableSession assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "DurationSeconds": "<your-selection>", ... 
} s3 = client( service_name="s3", botocore_session=RefreshableSession(assume_role_kwargs=assume_role_kwargs) ) ``` </details> <details> <summary><strong>STS (click to expand)</strong></summary> ### STS Most developers use AWS STS to assume an IAM role and retrieve a set of temporary security credentials. `boto3-refresh-session` can be used to ensure that those temporary credentials refresh automatically. For additional information on the exact parameters that `RefreshableSession` takes for STS, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.sts.STSRefreshableSession.html). ```python import boto3_refresh_session as brs assume_role_kwargs = { "RoleArn": "<your IAM role arn>", # required "RoleSessionName": "<your role session name>", # required ... } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, # required sts_client_kwargs={...}, # optional ... # misc. params for boto3.session.Session ) ``` </details> <details> <summary><strong>MFA (click to expand)</strong></summary> ### MFA Support When assuming a role that requires MFA, `boto3-refresh-session` supports automatic token provisioning through the `mfa_token_provider` parameter. This parameter accepts a callable that returns a fresh MFA token code (string) whenever credentials need to be refreshed. The `mfa_token_provider` approach is **strongly recommended** over manually providing `TokenCode` in `assume_role_kwargs`, as MFA tokens expire after 30 seconds while AWS temporary credentials can last for hours. By using a callable, your application can automatically fetch fresh tokens on each refresh without manual intervention. There is nothing preventing you from manually providing `TokenCode` *without* `mfa_token_provider`; however, *you* will be responsible for updating `TokenCode` *before* automatic temporary credential refresh occurs, which is likely to be a fragile and complicated approach. 
When using `mfa_token_provider`, you must also provide `SerialNumber` (your MFA device ARN) in `assume_role_kwargs`. For additional information on the exact parameters that `RefreshableSession` takes for MFA, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.sts.STSRefreshableSession.html). ⚠️ Most developers will probably find Example 4 most helpful. #### Examples ```python import boto3_refresh_session as brs # Example 1: Interactive prompt for MFA token def get_mfa_token(): return input("Enter MFA token: ") # we'll reuse this object in each example for simplicity :) assume_role_kwargs = { "RoleArn": "<your-role-arn>", "RoleSessionName": "<your-role-session-name>", "SerialNumber": "arn:aws:iam::123456789012:mfa/your-user", # required with mfa_token_provider } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_mfa_token, # callable that returns MFA token ) # Example 2: Using pyotp for TOTP-based MFA import pyotp def get_totp_token(): totp = pyotp.TOTP("<your-secret-key>") return totp.now() session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_totp_token, ) # Example 3: Retrieving token from environment variable or external service import os def get_env_token(): return os.environ.get("AWS_MFA_TOKEN", "") session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, mfa_token_provider=get_env_token, ) # Example 4: Using Yubikey (or any token provider CLI) from typing import Sequence import subprocess def mfa_token_provider(cmd: Sequence[str], timeout: float): p = subprocess.run( list(cmd), check=False, capture_output=True, text=True, timeout=timeout, ) return (p.stdout or "").strip() mfa_token_provider_args = { "cmd": ["ykman", "oath", "code", "--single", "AWS-prod"], # example token source "timeout": 3.0, } session = brs.RefreshableSession( assume_role_kwargs=assume_role_kwargs, 
mfa_token_provider=mfa_token_provider, mfa_token_provider_args=mfa_token_provider_args, ) ``` </details> <details> <summary><strong>SSO (click to expand)</strong></summary> ### SSO `boto3-refresh-session` supports SSO via AWS profiles. The pseudo-code below illustrates how to assume an IAM role using an AWS profile with SSO. Not shown, however, is running `aws sso login` manually, which `boto3-refresh-session` does not perform automatically for you. Therefore, you must manually run `aws sso login` as necessary. If you wish to automate `aws sso login` (not recommended) then you will need to write your own custom callable function and pass it to `RefreshableSession(method="custom", ...)`. In that event, please refer to the `Custom` documentation found in a separate section below. ```python from boto3_refresh_session import RefreshableSession session = RefreshableSession( assume_role_kwargs={ "RoleArn": "<your IAM role arn>", "RoleSessionName": "<your role session name>", }, profile_name="<your AWS profile name>", ... ) s3 = session.client("s3") ``` </details> <details> <summary><strong>Custom (click to expand)</strong></summary> ### Custom If you have a highly sophisticated, novel, or idiosyncratic authentication flow not included in `boto3-refresh-session`, then you will need to provide your own custom temporary credentials callable object. `RefreshableSession` accepts custom credentials callable objects, as shown below. For additional information on the exact parameters that `RefreshableSession` takes for custom authentication flows, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.custom.CustomRefreshableSession.html#boto3_refresh_session.methods.custom.CustomRefreshableSession). ```python # create (or import) your custom credential method def your_custom_credential_getter(...): ... 
return { "access_key": ..., "secret_key": ..., "token": ..., "expiry_time": ..., } # and pass it to RefreshableSession session = RefreshableSession( method="custom", # required custom_credentials_method=your_custom_credential_getter, # required custom_credentials_method_args=..., # optional region_name=region_name, # optional profile_name=profile_name, # optional ... # misc. boto3.session.Session params ) ``` </details> <details> <summary><strong>IoT Core X.509 (click to expand)</strong></summary> ### IoT Core X.509 AWS IoT Core can vend temporary AWS credentials through the **credentials provider** when you connect with an X.509 certificate and a **role alias**. `boto3-refresh-session` makes this flow seamless by automatically refreshing credentials over **mTLS**. For additional information on the exact parameters that `IOTX509RefreshableSession` takes, [check this documentation](https://michaelthomasletts.com/boto3-refresh-session/modules/generated/boto3_refresh_session.methods.iot.x509.IOTX509RefreshableSession.html). 
### PEM file ```python import boto3_refresh_session as brs # PEM certificate + private key example session = brs.RefreshableSession( method="iot", endpoint="<your-credentials-endpoint>.credentials.iot.<region>.amazonaws.com", role_alias="<your-role-alias>", certificate="/path/to/certificate.pem", private_key="/path/to/private-key.pem", thing_name="<your-thing-name>", # optional, if used in policies duration_seconds=3600, # optional, capped by role alias region_name="us-east-1", ) # Now you can use the session like any boto3 session s3 = session.client("s3") print(s3.list_buckets()) ``` ### PKCS#11 ```python session = brs.RefreshableSession( method="iot", endpoint="<your-credentials-endpoint>.credentials.iot.<region>.amazonaws.com", role_alias="<your-role-alias>", certificate="/path/to/certificate.pem", pkcs11={ "pkcs11_lib": "/usr/local/lib/softhsm/libsofthsm2.so", "user_pin": "1234", "slot_id": 0, "token_label": "MyToken", "private_key_label": "MyKey", }, thing_name="<your-thing-name>", region_name="us-east-1", ) ``` ### MQTT After initializing a session object, you can begin performing MQTT actions using the [mqtt method](https://github.com/michaelthomasletts/boto3-refresh-session/blob/deb68222925bf648f26e878ed4bc24b45317c7db/boto3_refresh_session/methods/iot/x509.py#L367)! You can reuse the same certificate, private key, and so on, as used to initialize `RefreshableSession`. Or, alternatively, you can provide separate PKCS#11 or certificate information, whether those be file paths or bytes values. Either way, at a minimum, you will need to provide the endpoint and client identifier (i.e., the thing name). 
```python from awscrt.mqtt import QoS conn = session.mqtt( endpoint="<your endpoint>-ats.iot.<region>.amazonaws.com", client_id="<your thing name or client ID>", ) conn.connect().result() conn.publish(topic="foo/bar", payload=b"hi", qos=QoS.AT_LEAST_ONCE) conn.disconnect().result() ``` </details> ## ⚠️ Changes Browse through the various changes to `boto3-refresh-session` over time. #### 😥 v3.0.0 **The changes introduced by v3.0.0 will not impact ~99% of users**, who generally interact with `boto3-refresh-session` via `RefreshableSession` only, *which is the intended usage for this package after all.* Advanced users, however, particularly those using low-level objects such as `BaseRefreshableSession | refreshable_session | BRSSession | utils.py`, may experience breaking changes. Please review [this PR](https://github.com/michaelthomasletts/boto3-refresh-session/pull/75) for additional details. #### ✂️ v4.0.0 The `ecs` module has been dropped. For additional details and rationale, please review [this PR](https://github.com/michaelthomasletts/boto3-refresh-session/pull/78). #### 😛 v5.0.0 Support for IoT Core via X.509 certificate-based authentication (over HTTPS) is now available! #### ➕ v5.1.0 MQTT support added for IoT Core via X.509 certificate-based authentication. #### ➕ v6.0.0 MFA support for STS added! #### 🔒😥 v6.2.0 - Client caching introduced to `RefreshableSession` in order to minimize memory footprint! Available via the `cache_clients` parameter. - Testing suite expanded to include IoT, MFA, caching, and much more! - A subtle bug was uncovered where `RefreshableSession` created refreshable credentials but boto3's underlying session continued to resolve credentials via the default provider chain (i.e., env vars, shared config, etc.) unless explicitly wired. `get_credentials()` and clients could, in certain setups, use base session credentials instead of the refreshable STS/IoT/custom credentials from the assumed role. 
To fix this, I updated the implementation in `BRSSession.__post_init__` to set `self._session._credentials = self._credentials`, ensuring that all boto3 clients created from `RefreshableSession` use the refreshable credentials provided to `RefreshableCredentials | DeferredRefreshableCredentials` as the single source of truth. After this change, refreshable credentials are used consistently everywhere, irrespective of setup. #### ✂️ v6.2.3 - The `RefreshableTemporaryCredentials` type hint was deprecated in favor of `TemporaryCredentials`. - `expiry_time` was added as a field returned by the `refreshable_credentials` method and `credentials` attribute.
text/markdown
Mike Letts
lettsmt@gmail.com
Michael Letts
lettsmt@gmail.com
MIT
boto3, botocore, aws, sts, credentials, token, refresh, iot, x509, mqtt
[ "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
https://github.com/michaelthomasletts/boto3-refresh-session
null
>=3.10
[]
[]
[]
[ "boto3", "botocore", "requests", "typing-extensions", "awscrt", "awsiotsdk" ]
[]
[]
[]
[ "Repository, https://github.com/michaelthomasletts/boto3-refresh-session", "Documentation, https://michaelthomasletts.github.io/boto3-refresh-session/index.html" ]
poetry/2.2.1 CPython/3.10.19 Linux/6.11.0-1018-azure
2026-01-16T04:28:30.769836
boto3_refresh_session-6.2.3.tar.gz
29,646
77/57/97ec6e9264600e145307c6526e51dc9919f117a92bf09362fead3fc5d3bb/boto3_refresh_session-6.2.3.tar.gz
source
sdist
null
false
ca21c9d050dc22f022bc06dd782728f6
7b275e9867b7d1817b903d52d06e83c5af23e53809ed2bfbd20ec8497fa47b9f
775797ec6e9264600e145307c6526e51dc9919f117a92bf09362fead3fc5d3bb
null
[]
2.4
mtmeastmoney
1.1.4
A Personal Eastmoney Library
# EastMoney
text/markdown
null
null
null
null
MIT License Copyright (c) 2022 mondayfirst Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
null
[ "Development Status :: 3 - Alpha", "Programming Language :: Python :: 3.9", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: MacOS", "Operating System :: POSIX :: Linux", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.9
[]
[]
[]
[ "ddddocr", "requests", "mtmtool", "pandas" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:29:31.875240
mtmeastmoney-1.1.4-py3-none-any.whl
11,036
40/a7/734e7fdfe26114221242954dffc81ee9ba4fcc6dffaf80259b684d2956f6/mtmeastmoney-1.1.4-py3-none-any.whl
py3
bdist_wheel
null
false
134472da29d59696cae8fd46774d0af2
636c004629d8cb8bb85e281c517e64590f133036a1319905d1e9fb0339e07913
40a7734e7fdfe26114221242954dffc81ee9ba4fcc6dffaf80259b684d2956f6
null
[ "LICENSE" ]
2.4
mtmeastmoney
1.1.4
A Personal Eastmoney Library
# EastMoney
text/markdown
null
null
null
null
MIT License Copyright (c) 2022 mondayfirst Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
null
[ "Development Status :: 3 - Alpha", "Programming Language :: Python :: 3.9", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: MacOS", "Operating System :: POSIX :: Linux", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.9
[]
[]
[]
[ "ddddocr", "requests", "mtmtool", "pandas" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:29:32.975296
mtmeastmoney-1.1.4.tar.gz
15,115
65/6d/c2be4299a5ef0b97710389d5a9553b04159ce356ecc404ed3e1327e05141/mtmeastmoney-1.1.4.tar.gz
source
sdist
null
false
5833c24c7827dc0e68c9800e928e8158
5c26318cf557584f235709c49ca722ec2c3c369c4c9c0523b6b0295571919ae5
656dc2be4299a5ef0b97710389d5a9553b04159ce356ecc404ed3e1327e05141
null
[ "LICENSE" ]
2.4
tunacode-cli
0.1.34
Your agentic CLI developer.
# tunacode-cli <img src="docs/images/logo.jpeg" alt="tunacode logo" width="200"/> [![PyPI version](https://badge.fury.io/py/tunacode-cli.svg)](https://badge.fury.io/py/tunacode-cli) [![Downloads](https://pepy.tech/badge/tunacode-cli)](https://pepy.tech/project/tunacode-cli) [![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Discord Shield](https://discord.com/api/guilds/1447688577126367346/widget.png?style=shield)](https://discord.gg/TN7Fpynv6H) A TUI code agent. > **Note:** Under active development - expect bugs. ## Interface The Textual-based terminal user interface provides a clean, interactive environment for AI-assisted coding, with a design heavily inspired by the classic NeXTSTEP user interface. ![Agent Response Panel](docs/media/agent-response.png) *Agent response panel with formatted output* ![Read File Tool](docs/media/read-file-tool.png) *Tool rendering with syntax highlighting* ![Plan Approval](docs/media/plan-approval.png) *Structured plan approval workflow* ## Theme Support The interface supports multiple themes for different preferences and environments. Customize the appearance with built-in themes or create your own color schemes. ## Model Setup Configure your AI models and settings through the provided setup interface. **Note:** TunaCode has full bash shell access. This tool assumes you know what you're doing. If you're concerned, run it in a sandboxed environment. ## v0.1.1 - Major Rewrite This release is a complete rewrite with a new Textual-based TUI. **Upgrading from v1?** The legacy v1 codebase is preserved in the `legacy-v1` branch and will only receive security updates. ## Requirements - Python 3.11+ ## Installation ```bash uv tool install tunacode-cli ``` ## Quick Start 1. Run the setup wizard to configure your API key: ```bash tunacode --setup ``` 2. 
Start coding: ```bash tunacode ``` ## Configuration Set your API key as an environment variable or use the setup wizard: ```bash export OPENAI_API_KEY="your-key" # or export ANTHROPIC_API_KEY="your-key" ``` Config file location: `~/.config/tunacode.json` For advanced settings including **local mode** for small context models, see the [Configuration Guide](docs/configuration/README.md). ## Commands | Command | Description | | -------- | ---------------------------- | | /help | Show available commands | | /model | Change AI model | | /clear | Clear conversation history | | /yolo | Toggle auto-confirm mode | | /branch | Create and switch git branch | | /plan | Toggle read-only planning | | /theme | Change UI theme | | /resume | Load/delete saved sessions | | !<cmd> | Run shell command | | exit | Quit tunacode | ## LSP Integration (Beta) TunaCode includes experimental Language Server Protocol support for real-time diagnostics. When an LSP server is detected in your PATH, it activates automatically. **Supported languages:** | Language | LSP Server | | ---------- | ----------------------------- | | Python | `ruff server` | | TypeScript | `typescript-language-server` | | JavaScript | `typescript-language-server` | | Go | `gopls` | | Rust | `rust-analyzer` | Diagnostics appear in the UI when editing files. This feature is beta - expect rough edges. ## Discord Server Join our official Discord server to get help, show us how you're using tunacode, and chat about anything LLM-related. [<img src="https://discord.com/api/guilds/1447688577126367346/widget.png?style=banner3" alt="Discord Banner 3"/>](https://discord.gg/TN7Fpynv6H) ## License MIT
text/markdown
null
larock22 <noreply@github.com>
null
null
MIT
agent, automation, cli, development
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Software Development", "Topic :: Utilities" ]
[]
null
null
<3.14,>=3.11
[]
[]
[]
[ "click<8.2.0,>=8.1.0", "defusedxml", "html2text>=2024.2.26", "pathspec>=0.12.1", "prompt-toolkit<4.0.0,>=3.0.52", "pydantic-ai<2.0.0,>=1.18.0", "pydantic<3.0.0,>=2.12.4", "pygments<3.0.0,>=2.19.2", "python-levenshtein>=0.21.0", "rich<15.0.0,>=14.2.0", "ruff>=0.14.0", "textual-autocomplete>=4.0...
[]
[]
[]
[ "Homepage, https://tunacode.xyz/", "Repository, https://github.com/alchemiststudiosDOTai/tunacode", "Issues, https://github.com/alchemiststudiosDOTai/tunacode/issues", "Documentation, https://github.com/alchemiststudiosDOTai/tunacode#readme" ]
twine/6.2.0 CPython/3.12.12
2026-01-16T04:29:33.122474
tunacode_cli-0.1.34-py3-none-any.whl
360,304
a2/a0/f2ca98edc58b70088842324fcd87185ceef105a999ee3343ccb7fc95c0f9/tunacode_cli-0.1.34-py3-none-any.whl
py3
bdist_wheel
null
false
9b7eea3a307d047dc37109791249dc8d
7f8036f2a9d82cbdf68012208e486f3e7d8f91ae562a101177f076274e393c02
a2a0f2ca98edc58b70088842324fcd87185ceef105a999ee3343ccb7fc95c0f9
null
[ "LICENSE" ]
2.4
tunacode-cli
0.1.34
Your agentic CLI developer.
# tunacode-cli <img src="docs/images/logo.jpeg" alt="tunacode logo" width="200"/> [![PyPI version](https://badge.fury.io/py/tunacode-cli.svg)](https://badge.fury.io/py/tunacode-cli) [![Downloads](https://pepy.tech/badge/tunacode-cli)](https://pepy.tech/project/tunacode-cli) [![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Discord Shield](https://discord.com/api/guilds/1447688577126367346/widget.png?style=shield)](https://discord.gg/TN7Fpynv6H) A TUI code agent. > **Note:** Under active development - expect bugs. ## Interface The Textual-based terminal user interface provides a clean, interactive environment for AI-assisted coding, with a design heavily inspired by the classic NeXTSTEP user interface. ![Agent Response Panel](docs/media/agent-response.png) *Agent response panel with formatted output* ![Read File Tool](docs/media/read-file-tool.png) *Tool rendering with syntax highlighting* ![Plan Approval](docs/media/plan-approval.png) *Structured plan approval workflow* ## Theme Support The interface supports multiple themes for different preferences and environments. Customize the appearance with built-in themes or create your own color schemes. ## Model Setup Configure your AI models and settings through the provided setup interface. **Note:** TunaCode has full bash shell access. This tool assumes you know what you're doing. If you're concerned, run it in a sandboxed environment. ## v0.1.1 - Major Rewrite This release is a complete rewrite with a new Textual-based TUI. **Upgrading from v1?** The legacy v1 codebase is preserved in the `legacy-v1` branch and will only receive security updates. ## Requirements - Python 3.11+ ## Installation ```bash uv tool install tunacode-cli ``` ## Quick Start 1. Run the setup wizard to configure your API key: ```bash tunacode --setup ``` 2. 
Start coding: ```bash tunacode ``` ## Configuration Set your API key as an environment variable or use the setup wizard: ```bash export OPENAI_API_KEY="your-key" # or export ANTHROPIC_API_KEY="your-key" ``` Config file location: `~/.config/tunacode.json` For advanced settings including **local mode** for small context models, see the [Configuration Guide](docs/configuration/README.md). ## Commands | Command | Description | | -------- | ---------------------------- | | /help | Show available commands | | /model | Change AI model | | /clear | Clear conversation history | | /yolo | Toggle auto-confirm mode | | /branch | Create and switch git branch | | /plan | Toggle read-only planning | | /theme | Change UI theme | | /resume | Load/delete saved sessions | | !<cmd> | Run shell command | | exit | Quit tunacode | ## LSP Integration (Beta) TunaCode includes experimental Language Server Protocol support for real-time diagnostics. When an LSP server is detected in your PATH, it activates automatically. **Supported languages:** | Language | LSP Server | | ---------- | ----------------------------- | | Python | `ruff server` | | TypeScript | `typescript-language-server` | | JavaScript | `typescript-language-server` | | Go | `gopls` | | Rust | `rust-analyzer` | Diagnostics appear in the UI when editing files. This feature is beta - expect rough edges. ## Discord Server Join our official Discord server to get help, show us how you're using tunacode, and chat about anything LLM-related. [<img src="https://discord.com/api/guilds/1447688577126367346/widget.png?style=banner3" alt="Discord Banner 3"/>](https://discord.gg/TN7Fpynv6H) ## License MIT
text/markdown
null
larock22 <noreply@github.com>
null
null
MIT
agent, automation, cli, development
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Software Development", "Topic :: Utilities" ]
[]
null
null
<3.14,>=3.11
[]
[]
[]
[ "click<8.2.0,>=8.1.0", "defusedxml", "html2text>=2024.2.26", "pathspec>=0.12.1", "prompt-toolkit<4.0.0,>=3.0.52", "pydantic-ai<2.0.0,>=1.18.0", "pydantic<3.0.0,>=2.12.4", "pygments<3.0.0,>=2.19.2", "python-levenshtein>=0.21.0", "rich<15.0.0,>=14.2.0", "ruff>=0.14.0", "textual-autocomplete>=4.0...
[]
[]
[]
[ "Homepage, https://tunacode.xyz/", "Repository, https://github.com/alchemiststudiosDOTai/tunacode", "Issues, https://github.com/alchemiststudiosDOTai/tunacode/issues", "Documentation, https://github.com/alchemiststudiosDOTai/tunacode#readme" ]
twine/6.2.0 CPython/3.12.12
2026-01-16T04:29:35.152088
tunacode_cli-0.1.34.tar.gz
2,302,097
c9/16/4322d48a05eceaa285af333477a926e9a69e5de9272e7b55dd71b7ae0806/tunacode_cli-0.1.34.tar.gz
source
sdist
null
false
9e30b225900c3770f8d7febf4d9f47d6
a1587eebfd6798ae4a5cbfd8f1eb160e2c1fcaeeb130fc268152f68c542147d9
c9164322d48a05eceaa285af333477a926e9a69e5de9272e7b55dd71b7ae0806
null
[ "LICENSE" ]
2.4
geek-cafe-saas-sdk
0.79.0
Base Reusable Services for SaaS
# Geek Cafe Services [![Python 3.13+](https://img.shields.io/badge/python-3.13+-blue.svg)](https://www.python.org/downloads/) [![Version](https://img.shields.io/badge/version-0.78.0-green.svg)](https://github.com/geekcafe/geek-cafe-services) [![DynamoDB](https://img.shields.io/badge/database-DynamoDB-orange.svg)](https://aws.amazon.com/dynamodb/) [![AWS Lambda](https://img.shields.io/badge/runtime-AWS%20Lambda-yellow.svg)](https://aws.amazon.com/lambda/) > **⚠️ Beta Notice**: This library is under active development. Breaking changes may occur until we reach a stable 1.0 release. We recommend pinning to specific versions in production. > **✨ New in v0.78.0**: Automatic security model, simplified handler patterns, and comprehensive pattern guides! <!-- COVERAGE-BADGE:START --> ## Test Coverage ![Tests](https://img.shields.io/badge/tests-1929%20passed-brightgreen) ![Coverage](https://img.shields.io/badge/coverage-78.1%25-yellow) **Overall Coverage:** 78.1% (20284/25964 statements) ### Coverage Summary | Metric | Value | |--------|-------| | Total Statements | 25,964 | | Covered Statements | 20,284 | | Missing Statements | 5,680 | | Coverage Percentage | 78.1% | | Total Tests | 1929 | | Test Status | ✅ All Passing | ### Files Needing Attention (< 80% coverage) | Coverage | Missing Lines | File | |----------|---------------|------| | 0.0% | 2 | `modules/executions/handlers/__init__.py` | | 0.0% | 2 | `modules/executions/handlers/get_status/__init__.py` | | 0.0% | 2 | `modules/feature_flags/models/__init__.py` | | 0.0% | 2 | `modules/feature_flags/services/__init__.py` | | 0.0% | 2 | `modules/file_system/handlers/unarchive/__init__.py` | | 0.0% | 10 | `modules/executions/handlers/get_status/app.py` | | 0.0% | 12 | `modules/file_system/handlers/unarchive/app.py` | | 0.0% | 21 | `core/models/base_async_event_model.py` | | 0.0% | 115 | `modules/feature_flags/models/feature_flag.py` | | 0.0% | 135 | `modules/executions/handlers/workflow_step_handler.py` | *... 
and 60 more files with < 80% coverage* ### Running Tests ```bash # Run all tests with coverage ./run_unit_tests.sh # View detailed coverage report open reports/coverage/index.html ``` *Last updated: 2026-01-01 15:14:41* --- <!-- COVERAGE-BADGE:END --> ## Description **Geek Cafe Services** is a production-ready, enterprise-grade library that provides reusable database services specifically designed for multi-tenant SaaS applications. Built on top of AWS DynamoDB, this library offers a prescriptive approach to building scalable, maintainable backend services with consistent patterns and best practices. ### Why Geek Cafe Services? 🏗️ **Consistent Architecture**: All services follow the same proven patterns for CRUD operations, error handling, and access control 🔒 **Multi-Tenant by Design**: Built-in tenant isolation ensures secure data separation across customers ⚡ **DynamoDB Optimized**: Leverages DynamoDB's strengths with efficient GSI indexes and query patterns 🛡️ **Production Ready**: Comprehensive error handling, logging, pagination, and batch operations 🧪 **Thoroughly Tested**: comprehensive test suites, with coverage tracked in the report above 📖 **Well Documented**: Extensive documentation with practical examples and best practices ### Perfect For - **SaaS Applications** requiring multi-tenant data isolation - **Serverless Architectures** built on AWS Lambda and DynamoDB - **Teams** wanting consistent, proven patterns across services - **Rapid Development** with pre-built, tested service components ## Installation ```bash # Clone the repository git clone https://github.com/geekcafe/geek-cafe-services.git cd geek-cafe-services # Setup the development environment ./pysetup.sh # Install dependencies pip install -r requirements.txt ``` ## Quick Start ### For Lambda Handlers (Recommended) ```python from geek_cafe_saas_sdk.lambda_handlers import create_handler from geek_cafe_saas_sdk.modules.workflows.services import WorkflowService # Module-level handler for connection 
pooling handler_wrapper = create_handler( service_class=WorkflowService, convert_request_case=True, # camelCase → snake_case convert_response_case=True # snake_case → camelCase ) def lambda_handler(event, context, injected_service=None): return handler_wrapper.execute(event, context, business_logic, injected_service) def business_logic(event, service): # Your business logic - security and case conversion are automatic! body = event.get("parsed_body") or {} result = service.create(**body) return {"execution_id": result.data.id, "status": "created"} ``` ### For Direct Service Usage ```python from geek_cafe_saas_sdk.modules.messaging.services import MessageService from geek_cafe_saas_sdk.core import AnonymousContextFactory # Create request context context = AnonymousContextFactory.create_test_context( user_id='user_123', tenant_id='tenant_456' ) # Initialize service with context service = MessageService(request_context=context) # Create a message - security is automatic! result = service.create( type="notification", content={"title": "Welcome!", "body": "Thanks for joining."} ) if result.success: print(f"Created message: {result.data.id}") ``` 📖 **[Complete Quick Start Guide](./docs/help/QUICK_START.md)** ## Available Services ### 🚀 Lambda Handler Wrappers (NEW in v0.2.0) **Purpose**: Eliminate 70-80% of boilerplate code in AWS Lambda functions **Key Capabilities**: - ✅ Automatic API key validation from environment - ✅ Request body parsing and camelCase → snake_case conversion - ✅ Service initialization with connection pooling for warm starts - ✅ Built-in CORS and error handling - ✅ User context extraction from authorizers - ✅ Service injection for easy testing - ✅ Support for public and secured endpoints **Available Handlers**: - `ApiKeyLambdaHandler` - API key validation (most common) - `PublicLambdaHandler` - No authentication (config endpoints) - `BaseLambdaHandler` - Extensible base for custom handlers **Quick Example**: ```python from 
geek_cafe_saas_sdk.lambda_handlers import ApiKeyLambdaHandler from geek_cafe_saas_sdk.vote_service import VoteService # All boilerplate handled in 3 lines handler = ApiKeyLambdaHandler( service_class=VoteService, require_body=True, convert_case=True ) def lambda_handler(event, context): return handler.execute(event, context, create_vote) def create_vote(event, service, user_context): # Just your business logic - everything else is handled! payload = event["parsed_body"] # Already parsed & converted return service.create_vote( tenant_id=user_context.get("tenant_id", "anonymous"), user_id=user_context.get("user_id", "anonymous"), **payload ) ``` **Use Cases**: Any AWS Lambda function with API key auth, reducing code by 70-80% while maintaining all functionality 📖 **[Complete Lambda Handlers Documentation](./docs/lambda_handlers.md)** ### 📧 MessageService **Purpose**: Complete message and notification management system **Key Capabilities**: - ✅ Full CRUD operations with tenant isolation - ✅ Flexible JSON content storage for any message type - ✅ Efficient querying by user, tenant, and message type - ✅ Automatic audit trails and timestamps - ✅ Built-in access control and validation **Use Cases**: User notifications, system alerts, communication logs, announcement management ### 🗳️ Voting Services Suite **Purpose**: Complete voting and rating system with real-time aggregation **Architecture**: Three interconnected services working together: #### VoteService - ✅ Individual vote management with automatic upsert behavior - ✅ One vote per user per target enforcement - ✅ Support for up/down votes or custom vote types - ✅ Comprehensive querying by user, target, and tenant #### VoteSummaryService - ✅ Pre-calculated vote totals for instant retrieval - ✅ Target-based optimization for high-performance lookups - ✅ Metadata tracking (last tallied timestamp, vote counts) - ✅ Tenant-scoped summary management #### VoteTallyService - ✅ Intelligent vote aggregation with pagination 
support - ✅ Batch processing for multiple targets - ✅ Stale target detection and automated re-tallying - ✅ Comprehensive error handling and resilience **Use Cases**: Product ratings, content voting, feedback systems, community polls, recommendation engines ## Documentation 📖 **[Documentation Index](./docs/DOCUMENTATION_INDEX.md)** - Complete documentation roadmap ### Core Pattern Guides (Start Here) - **[DatabaseService Patterns](./docs/patterns/DATABASE_SERVICE_PATTERNS.md)** ⭐ - CRUD, security, queries, models - **[Lambda Handler Patterns](./docs/patterns/LAMBDA_HANDLER_PATTERNS.md)** ⭐ - API Gateway & SQS handlers - **[Security Architecture](./docs/SECURITY_ARCHITECTURE.md)** - Automatic security model - **[Case Conversion Guide](./docs/CASE_CONVERSION_GUIDE.md)** - camelCase ↔ snake_case ### Module-Specific Guides - **[File System](./docs/file-system/README_FILE_SYSTEM_SDK_USAGE.md)** - File management - **[Workflows](./src/geek_cafe_saas_sdk/modules/workflows/)** - Execution & step models - **[Messaging](./docs/api/CHAT_API.md)** - Chat & contact threads - **[Events](./docs/events/EVENTS_DOMAIN.md)** - Event management - **[Analytics](./docs/analytics/website_analytics_readme.md)** - Analytics tracking ### Configuration & Architecture - **[Configuration Guide](./docs/CONFIG.md)** - Environment variables - **[Architecture Overview](./docs/ARCHITECTURE.md)** - System design - **[Service Pool Telemetry](./docs/SERVICE_POOL_TELEMETRY.md)** - Connection pooling ## Core Features ### 🏛️ **Enterprise Architecture** - **Multi-Tenant by Design**: Complete tenant isolation with automatic access control - **Consistent Patterns**: All services follow identical CRUD interfaces and conventions - **Scalable Design**: Built for high-throughput, multi-customer SaaS applications ### 🔧 **Developer Experience** - **Type Safety**: Full Python type hints for better IDE support and fewer bugs - **Comprehensive Testing**: extensive test coverage with realistic test scenarios - **Rich 
Documentation**: Detailed API docs, examples, and best practices - **Easy Integration**: Simple initialization and consistent error handling ### ⚡ **Performance & Reliability** - **DynamoDB Optimized**: Efficient GSI indexes and query patterns for fast operations - **Pagination Support**: Handle large datasets without memory issues - **Batch Operations**: Process multiple items efficiently - **Error Resilience**: Graceful handling of partial failures and edge cases ### 🛡️ **Production Ready** - **Structured Logging**: AWS Lambda Powertools integration for observability - **Comprehensive Validation**: Input validation with detailed error messages - **Access Control**: Automatic tenant and user-based security enforcement - **Audit Trails**: Complete tracking of who did what and when ## Environment Setup ```bash # Required environment variables export DYNAMODB_TABLE_NAME=your_table_name # Optional AWS configuration (if not using IAM roles) export AWS_REGION=us-east-1 export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_key ``` ## Testing ```bash # Run all tests pytest tests/ -v # Run specific service tests pytest tests/test_message_service.py -v pytest tests/test_vote_*_service.py -v # Run with coverage pytest tests/ --cov=geek_cafe_saas_sdk --cov-report=html ``` ## Project Structure ``` geek-cafe-services/ ├── src/geek_cafe_saas_sdk/ │ ├── lambda_handlers/ # 🆕 Lambda handler wrappers (v0.2.0) │ │ ├── base.py # Base handler with common functionality │ │ ├── api_key_handler.py # API key validation handler │ │ ├── public_handler.py # Public (no auth) handler │ │ └── service_pool.py # Service connection pooling │ ├── middleware/ # CORS, auth, error handling decorators │ ├── utilities/ # Request/response helpers │ ├── models/ # Data models with DynamoDB mapping │ ├── *_service.py # Service implementations │ ├── database_service.py # Base service class │ └── service_result.py # Standardized response wrapper ├── tests/ # Comprehensive test suite 
├── docs/ # Detailed documentation │ └── lambda_handlers.md # 🆕 Lambda wrapper documentation ├── examples/ # Working code examples │ └── lambda_handlers/ # 🆕 Handler examples └── README.md # This file ``` ## Contributing We welcome contributions! Here's how to get started: 1. **Fork the repository** and create a feature branch 2. **Follow the existing patterns** - consistency is key 3. **Add comprehensive tests** for any new functionality 4. **Update documentation** for API changes 5. **Submit a Pull Request** with a clear description ### Development Guidelines - Follow existing code style and patterns - Maintain 100% test coverage for new code - Update documentation for any API changes - Use meaningful commit messages - Test against multiple Python versions if possible ## License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## Support - 📖 **Documentation**: [Complete docs](./docs/services_overview.md) - 🐛 **Bug Reports**: [GitHub Issues](https://github.com/geekcafe/geek-cafe-services/issues) - 💡 **Feature Requests**: [GitHub Discussions](https://github.com/geekcafe/geek-cafe-services/discussions) - 📧 **Questions**: Create an issue with the "question" label --- **Built with ❤️ for the SaaS development community**
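The camelCase ↔ snake_case conversion that the handler wrappers above perform on request and response bodies can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the SDK's actual implementation; the helper names are made up.

```python
import re

def camel_to_snake(name: str) -> str:
    """tenantId -> tenant_id: insert '_' before each interior capital, then lowercase."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def snake_to_camel(name: str) -> str:
    """tenant_id -> tenantId: capitalize every segment after the first."""
    first, *rest = name.split("_")
    return first + "".join(word.capitalize() for word in rest)

def convert_keys(obj, convert):
    """Recursively rename dict keys, as the wrappers do for nested JSON bodies."""
    if isinstance(obj, dict):
        return {convert(k): convert_keys(v, convert) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_keys(item, convert) for item in obj]
    return obj
```

A request body like `{"tenantId": "...", "createdAt": 1}` would pass through `convert_keys(body, camel_to_snake)` on the way in and `convert_keys(result, snake_to_camel)` on the way out.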
text/markdown
null
Eric Wilson <eric.wilson@geekcafe.com>
null
null
Geek Cafe Services Business Source License 1.0 Copyright (c) 2025 Geek Cafe, LLC. All rights reserved. The "Geek Cafe Services" software (the "Software") is made available under this Business Source License (the "License"). This License allows you to view, study, and modify the source code, and to use it for personal, educational, research, or non-commercial purposes, subject to the following terms. 1. Grant of Rights a. You may copy and modify the Software for your own personal, educational, or internal development use. b. You may not use the Software, or modified versions of it, to provide any commercial service or product, including software-as-a-service, consulting, hosting, or resale, without a separate commercial license from Geek Cafe LLC. 2. Change Date Three (3) years from the date of first public release of a specific version of the Software (the “Change Date”), that version will automatically be made available under the Apache License 2.0. Later versions may have different Change Dates. 3. Attribution All copies or substantial portions of the Software must retain this License text, the copyright notice above, and a clear reference to the original source repository (https://github.com/geekcafe/geek-cafe-services). 4. Trademarks The names “Geek Cafe”, “Geek Cafe Services”, and any related logos are trademarks of Geek Cafe LLC and may not be used to endorse or promote derivative products without prior written permission. 5. Disclaimer of Warranty THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. 6. Limitation of Liability IN NO EVENT SHALL GEEK Cafe LLC OR CONTRIBUTORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM OR IN CONNECTION WITH THE SOFTWARE OR ITS USE. --- For commercial licensing inquiries, contact: legal@geekcafe.com
api gateway, aws, dynamodb, lambda, saas, serverless, services
[ "Development Status :: 4 - Beta", "Framework :: AWS CDK", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "P...
[]
null
null
>=3.8
[]
[]
[]
[ "boto3-assist", "build; extra == \"dev\"", "twine; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/geek-cafe-saas-sdk", "Documentation, https://github.com/geek-cafe-saas-sdk/blob/main/README.md", "Source Code, https://github.com/geek-cafe-saas-sdk" ]
twine/6.2.0 CPython/3.13.2
2026-01-16T04:29:53.155260
geek_cafe_saas_sdk-0.79.0-py3-none-any.whl
686,659
f2/c1/33a9c58d53ff186680b5107079349604eb768179c000ccd76a6ee57d06ac/geek_cafe_saas_sdk-0.79.0-py3-none-any.whl
py3
bdist_wheel
null
false
6ef0a7a507f6a1a10eb27f038c1fb04d
b3167b519fb27b4184e39bbae6bcb72b0282272727906e06b090ea93c3356e92
f2c133a9c58d53ff186680b5107079349604eb768179c000ccd76a6ee57d06ac
null
[ "LICENSE" ]
2.4
feldera
0.223.0
The feldera python client
# Feldera Python SDK The `feldera` Python package is the Python client for the Feldera HTTP API. The Python SDK documentation is available at: https://docs.feldera.com/python ## Getting started ### Installation ```bash uv pip install feldera ``` ### Example usage The Python client interacts with the API server of the Feldera instance. ```python # File: example.py from feldera import FelderaClient, PipelineBuilder, Pipeline # Instantiate client client = FelderaClient() # Default: http://localhost:8080 without authentication # client = FelderaClient(url="https://localhost:8080", api_key="apikey:...", requests_verify="/path/to/tls.crt") # (Re)create pipeline name = "example" sql = """ CREATE TABLE t1 (i1 INT) WITH ('materialized' = 'true'); CREATE MATERIALIZED VIEW v1 AS SELECT * FROM t1; """ print("(Re)creating pipeline...") pipeline = PipelineBuilder(client, name, sql).create_or_replace() pipeline.start() print(f"Pipeline status: {pipeline.status()}") pipeline.pause() print(f"Pipeline status: {pipeline.status()}") pipeline.stop(force=True) # Find existing pipeline pipeline = Pipeline.get(name, client) pipeline.start() print(f"Pipeline status: {pipeline.status()}") pipeline.stop(force=True) pipeline.clear_storage() ``` Run using: ```bash uv run python example.py ``` ### Environment variables Some default parameter values in the Python SDK can be overridden via environment variables. **Environment variables for `FelderaClient(...)`** ```bash export FELDERA_HOST="https://localhost:8080" # Overrides default for `url` export FELDERA_API_KEY="apikey:..." # Overrides default for `api_key` # The following together override default for `requests_verify` # export FELDERA_TLS_INSECURE="false" # If set to "1", "true" or "yes" (all case-insensitive), disables TLS certificate verification # export FELDERA_HTTPS_TLS_CERT="/path/to/tls.crt" # Custom TLS certificate ``` **Environment variables for `PipelineBuilder(...)`** ```bash export FELDERA_RUNTIME_VERSION="..." 
# Overrides default for `runtime_version` ``` ## Development Development assumes you have cloned the Feldera code repository. ### Installation ```bash cd python # Optional: create and activate virtual environment if you don't have one uv venv source .venv/bin/activate # Install in editable mode uv pip install -e . ``` ### Formatting Formatting requires the `ruff` package: `uv pip install ruff` ```bash cd python ruff check ruff format ``` ### Tests Running the tests requires the `pytest` package: `uv pip install pytest` ```bash # All tests cd python uv run python -m pytest tests/ # Specific tests directory uv run python -m pytest tests/platform/ # Specific test file uv run python -m pytest tests/platform/test_pipeline_crud.py # Tip: add argument -x at the end for it to fail fast ``` For further information about the tests, please see `tests/README.md`. ### Documentation Building documentation requires the `sphinx` package: `uv pip install sphinx` ```bash cd python/docs sphinx-apidoc -o . ../feldera make html make clean # Cleanup afterwards ``` ### Installation from GitHub Latest `main` branch: ```bash uv pip install git+https://github.com/feldera/feldera#subdirectory=python ``` Different branch (replace `BRANCH_NAME`): ```bash uv pip install git+https://github.com/feldera/feldera@BRANCH_NAME#subdirectory=python ```
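The environment-variable overrides listed above behave as plain fallbacks applied when an argument is not passed explicitly. A minimal sketch of that resolution pattern — illustrative only; `resolve_client_config` is not part of the SDK:

```python
import os

# Case-insensitive truthy values accepted by FELDERA_TLS_INSECURE, per the docs above.
TRUTHY = {"1", "true", "yes"}

def resolve_client_config(url=None, api_key=None, environ=os.environ):
    """Apply the FELDERA_* environment fallbacks described above."""
    insecure = environ.get("FELDERA_TLS_INSECURE", "").lower() in TRUTHY
    return {
        "url": url or environ.get("FELDERA_HOST", "http://localhost:8080"),
        "api_key": api_key or environ.get("FELDERA_API_KEY"),
        # requests_verify: False disables verification; otherwise a custom cert path, or True.
        "requests_verify": False if insecure else environ.get("FELDERA_HTTPS_TLS_CERT", True),
    }
```

Explicit constructor arguments win over the environment, and the environment wins over the built-in `http://localhost:8080` default.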
text/markdown
null
Feldera Team <dev@feldera.com>
null
null
null
feldera, python
[ "Programming Language :: Python :: 3.10", "Operating System :: OS Independent" ]
[]
null
null
>=3.10
[]
[]
[]
[ "requests", "pandas>=2.1.2", "typing-extensions", "numpy>=2.2.4", "pretty-errors", "ruff>=0.6.9", "PyJWT>=2.8.0" ]
[]
[]
[]
[ "Homepage, https://www.feldera.com", "Documentation, https://docs.feldera.com/python", "Repository, https://github.com/feldera/feldera", "Issues, https://github.com/feldera/feldera/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:29:53.258109
feldera-0.223.0-py3-none-any.whl
50,344
d8/8e/f04724b97d6581dba1044d0fe81a873b11159696bd66ef95de1c0cdb9959/feldera-0.223.0-py3-none-any.whl
py3
bdist_wheel
null
false
ed5e893a3359f011b31f83965e1b6652
1fc3e50cccef97e738747e4440c1325db519f051c1e56da357d2391b4c730877
d88ef04724b97d6581dba1044d0fe81a873b11159696bd66ef95de1c0cdb9959
MIT
[]
2.4
geek-cafe-saas-sdk
0.79.0
Base Reusable Services for SaaS
# Geek Cafe Services [![Python 3.13+](https://img.shields.io/badge/python-3.13+-blue.svg)](https://www.python.org/downloads/) [![Version](https://img.shields.io/badge/version-0.78.0-green.svg)](https://github.com/geekcafe/geek-cafe-services) [![DynamoDB](https://img.shields.io/badge/database-DynamoDB-orange.svg)](https://aws.amazon.com/dynamodb/) [![AWS Lambda](https://img.shields.io/badge/runtime-AWS%20Lambda-yellow.svg)](https://aws.amazon.com/lambda/) > **⚠️ Beta Notice**: This library is under active development. Breaking changes may occur until we reach a stable 1.0 release. We recommend pinning to specific versions in production. > **✨ New in v0.78.0**: Automatic security model, simplified handler patterns, and comprehensive pattern guides! <!-- COVERAGE-BADGE:START --> ## Test Coverage ![Tests](https://img.shields.io/badge/tests-1929%20passed-brightgreen) ![Coverage](https://img.shields.io/badge/coverage-78.1%25-yellow) **Overall Coverage:** 78.1% (20284/25964 statements) ### Coverage Summary | Metric | Value | |--------|-------| | Total Statements | 25,964 | | Covered Statements | 20,284 | | Missing Statements | 5,680 | | Coverage Percentage | 78.1% | | Total Tests | 1929 | | Test Status | ✅ All Passing | ### Files Needing Attention (< 80% coverage) | Coverage | Missing Lines | File | |----------|---------------|------| | 0.0% | 2 | `modules/executions/handlers/__init__.py` | | 0.0% | 2 | `modules/executions/handlers/get_status/__init__.py` | | 0.0% | 2 | `modules/feature_flags/models/__init__.py` | | 0.0% | 2 | `modules/feature_flags/services/__init__.py` | | 0.0% | 2 | `modules/file_system/handlers/unarchive/__init__.py` | | 0.0% | 10 | `modules/executions/handlers/get_status/app.py` | | 0.0% | 12 | `modules/file_system/handlers/unarchive/app.py` | | 0.0% | 21 | `core/models/base_async_event_model.py` | | 0.0% | 115 | `modules/feature_flags/models/feature_flag.py` | | 0.0% | 135 | `modules/executions/handlers/workflow_step_handler.py` | *... 
and 60 more files with < 80% coverage* ### Running Tests ```bash # Run all tests with coverage ./run_unit_tests.sh # View detailed coverage report open reports/coverage/index.html ``` *Last updated: 2026-01-01 15:14:41* --- <!-- COVERAGE-BADGE:END --> ## Description **Geek Cafe Services** is a production-ready, enterprise-grade library that provides reusable database services specifically designed for multi-tenant SaaS applications. Built on top of AWS DynamoDB, this library offers a prescriptive approach to building scalable, maintainable backend services with consistent patterns and best practices. ### Why Geek Cafe Services? 🏗️ **Consistent Architecture**: All services follow the same proven patterns for CRUD operations, error handling, and access control 🔒 **Multi-Tenant by Design**: Built-in tenant isolation ensures secure data separation across customers ⚡ **DynamoDB Optimized**: Leverages DynamoDB's strengths with efficient GSI indexes and query patterns 🛡️ **Production Ready**: Comprehensive error handling, logging, pagination, and batch operations 🧪 **Thoroughly Tested**: comprehensive test suites, with coverage tracked in the report above 📖 **Well Documented**: Extensive documentation with practical examples and best practices ### Perfect For - **SaaS Applications** requiring multi-tenant data isolation - **Serverless Architectures** built on AWS Lambda and DynamoDB - **Teams** wanting consistent, proven patterns across services - **Rapid Development** with pre-built, tested service components ## Installation ```bash # Clone the repository git clone https://github.com/geekcafe/geek-cafe-services.git cd geek-cafe-services # Setup the development environment ./pysetup.sh # Install dependencies pip install -r requirements.txt ``` ## Quick Start ### For Lambda Handlers (Recommended) ```python from geek_cafe_saas_sdk.lambda_handlers import create_handler from geek_cafe_saas_sdk.modules.workflows.services import WorkflowService # Module-level handler for connection 
pooling handler_wrapper = create_handler( service_class=WorkflowService, convert_request_case=True, # camelCase → snake_case convert_response_case=True # snake_case → camelCase ) def lambda_handler(event, context, injected_service=None): return handler_wrapper.execute(event, context, business_logic, injected_service) def business_logic(event, service): # Your business logic - security and case conversion are automatic! body = event.get("parsed_body") or {} result = service.create(**body) return {"execution_id": result.data.id, "status": "created"} ``` ### For Direct Service Usage ```python from geek_cafe_saas_sdk.modules.messaging.services import MessageService from geek_cafe_saas_sdk.core import AnonymousContextFactory # Create request context context = AnonymousContextFactory.create_test_context( user_id='user_123', tenant_id='tenant_456' ) # Initialize service with context service = MessageService(request_context=context) # Create a message - security is automatic! result = service.create( type="notification", content={"title": "Welcome!", "body": "Thanks for joining."} ) if result.success: print(f"Created message: {result.data.id}") ``` 📖 **[Complete Quick Start Guide](./docs/help/QUICK_START.md)** ## Available Services ### 🚀 Lambda Handler Wrappers (NEW in v0.2.0) **Purpose**: Eliminate 70-80% of boilerplate code in AWS Lambda functions **Key Capabilities**: - ✅ Automatic API key validation from environment - ✅ Request body parsing and camelCase → snake_case conversion - ✅ Service initialization with connection pooling for warm starts - ✅ Built-in CORS and error handling - ✅ User context extraction from authorizers - ✅ Service injection for easy testing - ✅ Support for public and secured endpoints **Available Handlers**: - `ApiKeyLambdaHandler` - API key validation (most common) - `PublicLambdaHandler` - No authentication (config endpoints) - `BaseLambdaHandler` - Extensible base for custom handlers **Quick Example**: ```python from 
geek_cafe_saas_sdk.lambda_handlers import ApiKeyLambdaHandler from geek_cafe_saas_sdk.vote_service import VoteService # All boilerplate handled in 3 lines handler = ApiKeyLambdaHandler( service_class=VoteService, require_body=True, convert_case=True ) def lambda_handler(event, context): return handler.execute(event, context, create_vote) def create_vote(event, service, user_context): # Just your business logic - everything else is handled! payload = event["parsed_body"] # Already parsed & converted return service.create_vote( tenant_id=user_context.get("tenant_id", "anonymous"), user_id=user_context.get("user_id", "anonymous"), **payload ) ``` **Use Cases**: Any AWS Lambda function with API key auth, reducing code by 70-80% while maintaining all functionality 📖 **[Complete Lambda Handlers Documentation](./docs/lambda_handlers.md)** ### 📧 MessageService **Purpose**: Complete message and notification management system **Key Capabilities**: - ✅ Full CRUD operations with tenant isolation - ✅ Flexible JSON content storage for any message type - ✅ Efficient querying by user, tenant, and message type - ✅ Automatic audit trails and timestamps - ✅ Built-in access control and validation **Use Cases**: User notifications, system alerts, communication logs, announcement management ### 🗳️ Voting Services Suite **Purpose**: Complete voting and rating system with real-time aggregation **Architecture**: Three interconnected services working together: #### VoteService - ✅ Individual vote management with automatic upsert behavior - ✅ One vote per user per target enforcement - ✅ Support for up/down votes or custom vote types - ✅ Comprehensive querying by user, target, and tenant #### VoteSummaryService - ✅ Pre-calculated vote totals for instant retrieval - ✅ Target-based optimization for high-performance lookups - ✅ Metadata tracking (last tallied timestamp, vote counts) - ✅ Tenant-scoped summary management #### VoteTallyService - ✅ Intelligent vote aggregation with pagination 
support - ✅ Batch processing for multiple targets - ✅ Stale target detection and automated re-tallying - ✅ Comprehensive error handling and resilience **Use Cases**: Product ratings, content voting, feedback systems, community polls, recommendation engines ## Documentation 📖 **[Documentation Index](./docs/DOCUMENTATION_INDEX.md)** - Complete documentation roadmap ### Core Pattern Guides (Start Here) - **[DatabaseService Patterns](./docs/patterns/DATABASE_SERVICE_PATTERNS.md)** ⭐ - CRUD, security, queries, models - **[Lambda Handler Patterns](./docs/patterns/LAMBDA_HANDLER_PATTERNS.md)** ⭐ - API Gateway & SQS handlers - **[Security Architecture](./docs/SECURITY_ARCHITECTURE.md)** - Automatic security model - **[Case Conversion Guide](./docs/CASE_CONVERSION_GUIDE.md)** - camelCase ↔ snake_case ### Module-Specific Guides - **[File System](./docs/file-system/README_FILE_SYSTEM_SDK_USAGE.md)** - File management - **[Workflows](./src/geek_cafe_saas_sdk/modules/workflows/)** - Execution & step models - **[Messaging](./docs/api/CHAT_API.md)** - Chat & contact threads - **[Events](./docs/events/EVENTS_DOMAIN.md)** - Event management - **[Analytics](./docs/analytics/website_analytics_readme.md)** - Analytics tracking ### Configuration & Architecture - **[Configuration Guide](./docs/CONFIG.md)** - Environment variables - **[Architecture Overview](./docs/ARCHITECTURE.md)** - System design - **[Service Pool Telemetry](./docs/SERVICE_POOL_TELEMETRY.md)** - Connection pooling ## Core Features ### 🏛️ **Enterprise Architecture** - **Multi-Tenant by Design**: Complete tenant isolation with automatic access control - **Consistent Patterns**: All services follow identical CRUD interfaces and conventions - **Scalable Design**: Built for high-throughput, multi-customer SaaS applications ### 🔧 **Developer Experience** - **Type Safety**: Full Python type hints for better IDE support and fewer bugs - **Comprehensive Testing**: 100% test coverage with realistic test scenarios - **Rich 
Documentation**: Detailed API docs, examples, and best practices - **Easy Integration**: Simple initialization and consistent error handling ### ⚡ **Performance & Reliability** - **DynamoDB Optimized**: Efficient GSI indexes and query patterns for fast operations - **Pagination Support**: Handle large datasets without memory issues - **Batch Operations**: Process multiple items efficiently - **Error Resilience**: Graceful handling of partial failures and edge cases ### 🛡️ **Production Ready** - **Structured Logging**: AWS Lambda Powertools integration for observability - **Comprehensive Validation**: Input validation with detailed error messages - **Access Control**: Automatic tenant and user-based security enforcement - **Audit Trails**: Complete tracking of who did what and when ## Environment Setup ```bash # Required environment variables export DYNAMODB_TABLE_NAME=your_table_name # Optional AWS configuration (if not using IAM roles) export AWS_REGION=us-east-1 export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_key ``` ## Testing ```bash # Run all tests pytest tests/ -v # Run specific service tests pytest tests/test_message_service.py -v pytest tests/test_vote_*_service.py -v # Run with coverage pytest tests/ --cov=geek_cafe_saas_sdk --cov-report=html ``` ## Project Structure ``` geek-cafe-services/ ├── src/geek_cafe_saas_sdk/ │ ├── lambda_handlers/ # 🆕 Lambda handler wrappers (v0.2.0) │ │ ├── base.py # Base handler with common functionality │ │ ├── api_key_handler.py # API key validation handler │ │ ├── public_handler.py # Public (no auth) handler │ │ └── service_pool.py # Service connection pooling │ ├── middleware/ # CORS, auth, error handling decorators │ ├── utilities/ # Request/response helpers │ ├── models/ # Data models with DynamoDB mapping │ ├── *_service.py # Service implementations │ ├── database_service.py # Base service class │ └── service_result.py # Standardized response wrapper ├── tests/ # Comprehensive test suite 
├── docs/ # Detailed documentation │ └── lambda_handlers.md # 🆕 Lambda wrapper documentation ├── examples/ # Working code examples │ └── lambda_handlers/ # 🆕 Handler examples └── README.md # This file ``` ## Contributing We welcome contributions! Here's how to get started: 1. **Fork the repository** and create a feature branch 2. **Follow the existing patterns** - consistency is key 3. **Add comprehensive tests** for any new functionality 4. **Update documentation** for API changes 5. **Submit a Pull Request** with a clear description ### Development Guidelines - Follow existing code style and patterns - Maintain 100% test coverage for new code - Update documentation for any API changes - Use meaningful commit messages - Test against multiple Python versions if possible ## License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## Support - 📖 **Documentation**: [Complete docs](./docs/services_overview.md) - 🐛 **Bug Reports**: [GitHub Issues](https://github.com/geekcafe/geek-cafe-services/issues) - 💡 **Feature Requests**: [GitHub Discussions](https://github.com/geekcafe/geek-cafe-services/discussions) - 📧 **Questions**: Create an issue with the "question" label --- **Built with ❤️ for the SaaS development community**
text/markdown
null
Eric Wilson <eric.wilson@geekcafe.com>
null
null
Geek Cafe Services Business Source License 1.0 Copyright (c) 2025 Geek Cafe, LLC. All rights reserved. The "Geek Cafe Services" software (the "Software") is made available under this Business Source License (the "License"). This License allows you to view, study, and modify the source code, and to use it for personal, educational, research, or non-commercial purposes, subject to the following terms. 1. Grant of Rights a. You may copy and modify the Software for your own personal, educational, or internal development use. b. You may not use the Software, or modified versions of it, to provide any commercial service or product, including software-as-a-service, consulting, hosting, or resale, without a separate commercial license from Geek Cafe LLC. 2. Change Date Three (3) years from the date of first public release of a specific version of the Software (the “Change Date”), that version will automatically be made available under the Apache License 2.0. Later versions may have different Change Dates. 3. Attribution All copies or substantial portions of the Software must retain this License text, the copyright notice above, and a clear reference to the original source repository (https://github.com/geekcafe/geek-cafe-services). 4. Trademarks The names “Geek Cafe”, “Geek Cafe Services”, and any related logos are trademarks of Geek Cafe LLC and may not be used to endorse or promote derivative products without prior written permission. 5. Disclaimer of Warranty THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. 6. Limitation of Liability IN NO EVENT SHALL GEEK Cafe LLC OR CONTRIBUTORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM OR IN CONNECTION WITH THE SOFTWARE OR ITS USE. --- For commercial licensing inquiries, contact: legal@geekcafe.com
api gateway, aws, dynamodb, lambda, saas, serverless, services
[ "Development Status :: 4 - Beta", "Framework :: AWS CDK", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "P...
[]
null
null
>=3.8
[]
[]
[]
[ "boto3-assist", "build; extra == \"dev\"", "twine; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/geek-cafe-saas-sdk", "Documentation, https://github.com/geek-cafe-saas-sdk/blob/main/README.md", "Source Code, https://github.com/geek-cafe-saas-sdk" ]
twine/6.2.0 CPython/3.13.2
2026-01-16T04:29:54.765798
geek_cafe_saas_sdk-0.79.0.tar.gz
533,427
a7/d2/8beb64d2934992dc7a469ed460f5cd55b52801cb7fdd913ba8da6fa2b8c2/geek_cafe_saas_sdk-0.79.0.tar.gz
source
sdist
null
false
bfa271a909c0191ee3249c5e3d6e7a0f
416f8fde55a259aabfb329ca1bcb08a514240aa7cc5a5daf8adf947cbc7d6401
a7d28beb64d2934992dc7a469ed460f5cd55b52801cb7fdd913ba8da6fa2b8c2
null
[ "LICENSE" ]
2.4
feldera
0.223.0
The feldera python client
# Feldera Python SDK The `feldera` Python package is the Python client for the Feldera HTTP API. The Python SDK documentation is available at: https://docs.feldera.com/python ## Getting started ### Installation ```bash uv pip install feldera ``` ### Example usage The Python client interacts with the API server of the Feldera instance. ```python # File: example.py from feldera import FelderaClient, PipelineBuilder, Pipeline # Instantiate client client = FelderaClient() # Default: http://localhost:8080 without authentication # client = FelderaClient(url="https://localhost:8080", api_key="apikey:...", requests_verify="/path/to/tls.crt") # (Re)create pipeline name = "example" sql = """ CREATE TABLE t1 (i1 INT) WITH ('materialized' = 'true'); CREATE MATERIALIZED VIEW v1 AS SELECT * FROM t1; """ print("(Re)creating pipeline...") pipeline = PipelineBuilder(client, name, sql).create_or_replace() pipeline.start() print(f"Pipeline status: {pipeline.status()}") pipeline.pause() print(f"Pipeline status: {pipeline.status()}") pipeline.stop(force=True) # Find existing pipeline pipeline = Pipeline.get(name, client) pipeline.start() print(f"Pipeline status: {pipeline.status()}") pipeline.stop(force=True) pipeline.clear_storage() ``` Run using: ```bash uv run python example.py ``` ### Environment variables Some default parameter values in the Python SDK can be overridden via environment variables. **Environment variables for `FelderaClient(...)`** ```bash export FELDERA_HOST="https://localhost:8080" # Overrides default for `url` export FELDERA_API_KEY="apikey:..." # Overrides default for `api_key` # The following together override default for `requests_verify` # export FELDERA_TLS_INSECURE="false" # If set to "1", "true" or "yes" (all case-insensitive), disables TLS certificate verification # export FELDERA_HTTPS_TLS_CERT="/path/to/tls.crt" # Custom TLS certificate ``` **Environment variables for `PipelineBuilder(...)`** ```bash export FELDERA_RUNTIME_VERSION="..." 
# Overrides default for `runtime_version` ``` ## Development Development assumes you have cloned the Feldera code repository. ### Installation ```bash cd python # Optional: create and activate virtual environment if you don't have one uv venv source .venv/bin/activate # Install in editable mode uv pip install -e . ``` ### Formatting Formatting requires the `ruff` package: `uv pip install ruff` ```bash cd python ruff check ruff format ``` ### Tests Running the tests requires the `pytest` package: `uv pip install pytest` ```bash # All tests cd python uv run python -m pytest tests/ # Specific tests directory uv run python -m pytest tests/platform/ # Specific test file uv run python -m pytest tests/platform/test_pipeline_crud.py # Tip: add argument -x at the end for it to fail fast ``` For further information about the tests, please see `tests/README.md`. ### Documentation Building documentation requires the `sphinx` package: `uv pip install sphinx` ```bash cd python/docs sphinx-apidoc -o . ../feldera make html make clean # Cleanup afterwards ``` ### Installation from GitHub Latest `main` branch: ```bash uv pip install git+https://github.com/feldera/feldera#subdirectory=python ``` Different branch (replace `BRANCH_NAME`): ```bash uv pip install git+https://github.com/feldera/feldera@BRANCH_NAME#subdirectory=python ```
text/markdown
null
Feldera Team <dev@feldera.com>
null
null
null
feldera, python
[ "Programming Language :: Python :: 3.10", "Operating System :: OS Independent" ]
[]
null
null
>=3.10
[]
[]
[]
[ "requests", "pandas>=2.1.2", "typing-extensions", "numpy>=2.2.4", "pretty-errors", "ruff>=0.6.9", "PyJWT>=2.8.0" ]
[]
[]
[]
[ "Homepage, https://www.feldera.com", "Documentation, https://docs.feldera.com/python", "Repository, https://github.com/feldera/feldera", "Issues, https://github.com/feldera/feldera/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:29:56.169026
feldera-0.223.0.tar.gz
45,035
36/50/c4076a1d47b326e2469aa89074e1790231e18a068bc59777b95df172a100/feldera-0.223.0.tar.gz
source
sdist
null
false
c603f1e4606ae8e56983bf03e1fb19ab
9a1bcf529e4b86153c849cdf8a6c96b4747683af12c0dd8b43ab90b9b3f04162
3650c4076a1d47b326e2469aa89074e1790231e18a068bc59777b95df172a100
MIT
[]
2.4
hh-applicant-tool
1.5.3
HH-Applicant-Tool: An automation utility for HeadHunter (hh.ru) designed to streamline the job search process by auto-applying to relevant vacancies and periodically refreshing resumes to stay at the top of recruiter searches.
# HH Applicant Tool

> Looking for hourly or project-based work: [@feedback_s3rgeym_bot](https://t.me/feedback_s3rgeym_bot) (Python, Vue.js, Devops)

![Publish to PyPI](https://github.com/s3rgeym/hh-applicant-tool/actions/workflows/publish.yml/badge.svg) [![PyPi Version](https://img.shields.io/pypi/v/hh-applicant-tool)]() [![Python Versions](https://img.shields.io/pypi/pyversions/hh-applicant-tool.svg)]() [![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/s3rgeym/hh-applicant-tool)]() [![PyPI - Downloads](https://img.shields.io/pypi/dm/hh-applicant-tool)]() [![Total Downloads](https://static.pepy.tech/badge/hh-applicant-tool)]()

<div align="center"> <img src="https://github.com/user-attachments/assets/29d91490-2c83-4e3f-a573-c7a6182a4044" width="500"> </div>

### ☕ Support the project

[![Donate BTC](https://img.shields.io/badge/Donate-BTC-orange?style=for-the-badge&logo=bitcoin&logoColor=white)](bitcoin:BC1QWQXZX6D5Q0J5QVGH2VYXTFXX9Y6EPPGCW3REHS?label=%D0%94%D0%BB%D1%8F%20%D0%BF%D0%BE%D0%B6%D0%B5%D1%80%D1%82%D0%B2%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B9)

**BTC Address:** `BC1QWQXZX6D5Q0J5QVGH2VYXTFXX9Y6EPPGCW3REHS`

---

## ✨ Key advantages

- 💸 **Completely free.** Online and Telegram services with similar functionality charge from 5,000 to 12,000 rubles per month.
- 🔒 **Your personal data stays private.** Unlike third-party services, your email, phone number, password, and other personal data are never sent anywhere. You can verify this yourself by studying the [open source code](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool). Owners of third-party services will never show you their sources. They know everything about you, and that data will either be quietly sold to crooks or leak in a breach.
- 💾 **Contacts and other information are saved.** Employer contacts, along with information about employers and their vacancies, are stored in a database, so you can search for what you need far faster than on the website — even with minimal SQL experience (a query language originally designed for non-programmers).
- 🛡️ **Protection from bans.** The utility sends requests from your own device, imitating an ordinary user. Third-party services fire off requests for hundreds of accounts from a single server, which raises the probability of your account being blocked to practically 100%.
- 😎 **Easy to use.** Any novice computer user can figure the utility out — so much so that recruiters are already complaining in their chats about mass applications from 14–17-year-olds who have successfully mastered this tool.
- 👯 **Multi-account support and resume management.** Thanks to profiles, the utility can work with an unlimited number of accounts and resumes.
- 🖥️ **A full CLI that runs on servers.** The utility has a clean console interface. Although a browser is used to get past the login protection, it runs in headless mode by default. `hh-applicant-tool` needs no GPU or graphical environment (X server), so you can authorize even from a server or inside a Docker container.
- 🚀 **Scripting.** You can use the utility from your own Python scripts.
- 🤖 **Fighting ATS and HR.** Shoddy companies have rolled out neural-network ATS systems that reject an application within five seconds. A rejection can arrive simply because one keyword is missing from your resume — to say nothing of the brain-dead filters that screen by zodiac sign (they don't like Capricorns!!!). This devalues the effort you put into writing cover letters and reading the endless walls of AI-generated nonsense. If HR can't be bothered to read resumes (these days they've stopped writing their own texts, too), there is no reason for you to read their output either.
The utility frees you from this routine, which otherwise turns job hunting into a full-time job of its own. The rejection rate currently runs at 98–99%, counting the "silent" employers who never reply at all, and the only way to improve your odds of simply landing an interview is to automatically send applications to every suitable vacancy. Most people have dual-SIM phones, which means anyone can send up to 400 applications per day — and more still if you register accounts for relatives!

---

## Contents

- [HH Applicant Tool](#hh-applicant-tool)
  - [☕ Support the project](#-support-the-project)
  - [✨ Key advantages](#-key-advantages)
  - [Contents](#contents)
  - [Description](#description)
  - [Backstory](#backstory)
  - [Running via Docker](#running-via-docker)
  - [Standard installation](#standard-installation)
    - [Installing the utility](#installing-the-utility)
    - [Additional dependencies](#additional-dependencies)
  - [Authorization](#authorization)
  - [Command reference](#command-reference)
  - [Using AI](#using-ai)
    - [OpenAI/ChatGPT](#openaichatgpt)
  - [Message templates](#message-templates)
  - [Application data](#application-data)
    - [Configuration file](#configuration-file)
    - [Logs](#logs)
    - [Database](#database)
  - [Using from scripts](#using-from-scripts)
  - [Additional settings](#additional-settings)
  - [License agreement (Limited Non-Commercial License)](#license-agreement-limited-non-commercial-license)

---

## Description

> This utility ignores the "ban" on third-party API access to HH, because it passes itself off as the official Android application
> The cover-letter generator can use AI, including ChatGPT. Details below.

A utility for hungry young wolves and experienced old ones alike, automating actions on HH.RU such as mass-applying to suitable vacancies and bumping all your resumes (a free equivalent of the paid HH service). The utility stores application data locally, including any contacts you receive.
This is convenient, because the contact stays saved even if a rejection arrives later. My advice: hide your phone number from employers if you mass-apply with the utility — the number of scammers on the red site is, to put it mildly, off the charts. The utility has a Telegram channel: [HH Applicant Tool](https://t.me/hh_applicant_tool). The old <s>[HH Resume Automate](https://t.me/hh_resume_automate)</s> was taken down by some geniuses who saw a copyright violation in a Japanese flag with two letters "h"...

Works with Python >= 3.10. You can install the required Python version via asdf/pyenv/conda or similar tools. Manjaro and even recent Ubuntu releases ship a new enough Python.

The utility is cross-platform. It is guaranteed to work on Linux, Mac, and Windows, including WSL. If you have a rooted phone, you can extract the `access` and `refresh` tokens from the official app and add them to the config.

Example run:

![image](https://github.com/user-attachments/assets/a0cce1aa-884b-4d84-905a-3bb207eba4a3)

> If you set filters in the web interface, they will be applied when the script responds to matching vacancies
> The utility automatically picks up proxies from environment variables such as http_proxy or HTTPS_PROXY

---

## Backstory

For a long time I mass-applied from the browser console:

```js
$$('[data-qa="vacancy-serp__vacancy_response"]').forEach((el) => el.click());
```

It works, if not perfectly. I even tried automating submissions with `p[yu]ppeteer`, until I read the [documentation](https://github.com/hhru/api) and discovered that the **API** already contains every method I need. HeadHunter lets you register your own application, but registration is moderated by hand, and it is unlikely anyone would approve an app built for spamming applications. So I [decompiled](https://gist.github.com/s3rgeym/eee96bbf91b04f7eb46b7449f8884a00) the official **Android** app and extracted the **CLIENT_ID** and **CLIENT_SECRET** needed to work through the **API**.
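The employer contacts the utility collects are kept in an ordinary SQLite database (the built-in `query` command runs SQL against it), so any SQLite client — including Python's standard `sqlite3` module — can read it directly. A minimal sketch, using an in-memory stand-in for the real database file and an invented column layout: only the `vacancy_contacts` table name comes from this README's own examples, so inspect the real schema with `.schema` before querying.

```python
import sqlite3


def demo_contact_count() -> int:
    """Count rows in a vacancy_contacts-style table.

    The column layout below is illustrative only; the real database lives in
    the tool's config directory and may use different columns.
    """
    conn = sqlite3.connect(":memory:")  # stand-in for the real contacts DB
    conn.execute(
        "CREATE TABLE vacancy_contacts (vacancy_id TEXT, contact TEXT)"
    )
    conn.executemany(
        "INSERT INTO vacancy_contacts VALUES (?, ?)",
        [("1", "hr@example.com"), ("2", "+7 900 000-00-00")],
    )
    # The same kind of query the utility's `query` command runs for you
    (count,) = conn.execute("SELECT count(*) FROM vacancy_contacts").fetchone()
    conn.close()
    return count


print(demo_contact_count())
```

Pointing `sqlite3.connect()` at the actual database file instead of `":memory:"` (and dropping the demo setup) gives you the same data the `query` command exposes.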
---

## Running via Docker

This is the approach the developer recommends. Use it as well if the standard installation does not work for you. It is also the simplest way to run the utility — just copy-paste five commands — and it suits owners of dedicated servers used for VPNs. The only drawback of `docker` is disk usage: launching the Chromium used for authorization requires installing half of Ubuntu (over a gigabyte).

First install `docker` and `docker-compose`:

```sh
sudo apt install docker.io docker-compose-v2
```

Clone the repository and change into the directory:

```sh
git clone https://github.com/s3rgeym/hh-applicant-tool
cd hh-applicant-tool
```

> docker-compose commands must be run from inside this directory!

Now authorize:

```sh
docker-compose run -u docker -it hh_applicant_tool \
  hh-applicant-tool -vv auth -k
```

Sample output:

```
👤 Введите email или телефон: your-mail@gmail.com
📨 Код был отправлен. Проверьте почту или SMS.
📩 Введите полученный код: 1234
🔓 Авторизация прошла успешно!
```

The captcha is displayed only in terminals supporting the **kitty** graphics protocol, such as **Kitty** or **Konsole**.

Authorizing with an explicit login and password looks like this:

```sh
docker-compose run -u docker -it hh_applicant_tool \
  hh-applicant-tool -vv auth -k '<login>' -p '<password>'
```

Authorization is covered in detail [here](#authorization).

Once authorized, you can start the cron-driven application mailer:

```sh
docker-compose up -d
```

What will it do?

- Send applications from every published resume.
- Bump your resumes.

Viewing the `cron` logs:

```sh
docker compose logs -f
```

The output should look something like:

```sh
hh_applicant_tool | [Wed Jan 14 08:33:53 MSK 2026] Running startup tasks...
hh_applicant_tool | ℹ️ Токен не истек, обновление не требуется.
hh_applicant_tool | ✅ Обновлено Программист
```

Press `Ctrl-C` to stop following the logs.
Error details can be found in `config/log.txt`, and employer contacts in `config/data` using `sqlite3`. `config/config.json` stores the tokens that grant access to your account. The launched Docker services restart automatically after a reboot. To stop them, run:

```sh
docker-compose down
```

To update the utility, in most cases it is enough to run, inside the directory:

```sh
git pull
```

In rare cases you need to rebuild everything:

```sh
docker compose up -d --build
```

To send applications from several accounts, edit `docker-compose.yml`:

```yaml
services:
  # Leave this service unchanged
  hh_applicant_tool:
    # ...
  # Add new entries below:
  # simply copy-paste, changing the service name, container_name, and HH_PROFILE_ID
  hh_second:
    extends: hh_applicant_tool
    container_name: hh_second
    environment:
      - HH_PROFILE_ID=second
  hh_third:
    extends: hh_applicant_tool
    container_name: hh_third
    environment:
      - HH_PROFILE_ID=third
  hh_fourth:
    extends: hh_applicant_tool
    container_name: hh_fourth
    environment:
      - HH_PROFILE_ID=fourth
```

Here `HH_PROFILE_ID` is a profile identifier you make up yourself. Next, authorize each profile:

```sh
# Authorize the second profile
docker-compose exec -u docker -it hh_applicant_tool \
  hh-applicant-tool --profile-id second auth -k
# Authorize the third profile
docker-compose exec -u docker -it hh_applicant_tool \
  hh-applicant-tool --profile-id third auth -k
# And so on
```

Then run `docker-compose up -d` to start the new services.

You can try out the [commands](#command-reference) inside a running container:

```sh
$ docker-compose exec -u docker -it hh_applicant_tool bash
docker@1897bdd7c80b:/app$ hh-applicant-tool config -p /app/config/config.json
docker@1897bdd7c80b:/app$ hh-applicant-tool refresh-token
ℹ Токен не истек, обновление не требуется.
docker@1897bdd7c80b:/app$
```

> Note that `docker-compose exec`/`docker-compose run` are invoked with the `-u docker` arguments.
Only the `docker` user has `chromium` installed (it is needed for authorization), and running as that user also avoids permission problems where created files would require root rights to modify.

If you want to invoke the `apply-similar` command with extra arguments, create an `apply-similar.sh` file in the repository root:

```sh
#!/bin/bash
/usr/local/bin/python -m hh_applicant_tool apply-similar # add your arguments here
```

Then, in `startup.sh` and `crontab`, replace `/usr/local/bin/python -m hh_applicant_tool apply-similar` with `/bin/sh /app/apply-similar.sh`.

---

## Standard installation

### Installing the utility

The universal approach uses pipx (requires the `python-pipx` package on Arch):

```bash
# The full version with authorization support; includes Node.js and various utilities.
# The plain package without [playwright] can be used on a server if you copy the
# config there, and it is almost 500 MB smaller. Think it over. (c) s3rgeym. Subscribe.
$ pipx install 'hh-applicant-tool[playwright]'
# To use the very latest version, install it via git
$ pipx install "git+https://github.com/s3rgeym/hh-applicant-tool[playwright]"
# To upgrade to a new version
$ pipx upgrade hh-applicant-tool
```

pipx places the `hh-applicant-tool` executable in `~/.local/bin`, making the command available. `~/.local/bin` must be on your `$PATH` (most distributions add it already).

The traditional way for Linux/Mac:

```sh
mkdir -p ~/.venvs
python -m venv ~/.venvs/hh-applicant-tool
# You will have to repeat this step each time for the hh-applicant-tool command to be available
. ~/.venvs/hh-applicant-tool/bin/activate
pip install 'hh-applicant-tool[playwright]'
```

Separately, here is the **Windows** installation process in detail:

- First, install the latest **Python 3** in whatever way you prefer.
- Launch **Terminal** or **PowerShell** as Administrator and run:

```ps
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
```

This policy allows the current user (the one you logged in as) to run scripts. Without it, virtual environments will not work. Next you can install `pipx` and return to the instructions at the top of this section:

- Still as administrator, run:

```ps
python -m pip install --user pipx
```

And then:

```ps
python -m pipx ensurepath
```

- Restart Terminal/PowerShell and check:

```ps
pipx -h
```

Using virtual environments instead:

- Create and activate a virtual environment:

```ps
PS> python -m venv hh-applicant-venv
PS> .\hh-applicant-venv\Scripts\activate
```

- Install all packages into the `hh-applicant-venv` virtual environment:

```ps
(hh-applicant-venv) PS> pip install 'hh-applicant-tool[playwright]'
```

- Check that it works:

```ps
(hh-applicant-venv) PS> hh-applicant-tool -h
```

- If it fails, go back to the first step.
- For subsequent runs, activate the virtual environment first.

### Additional dependencies

After the above, you need to install dependencies such as Chromium:

```sh
$ hh-applicant-tool install
```

This step is optional; it is only needed for authorization.

---

## Authorization

Direct authorization:

```bash
$ hh-applicant-tool authorize '<your phone or email>' -p '<password>'
```

If you skipped the dependency installation step, you will see this error:

```sh
[E] BrowserType.launch: Executable doesn't exist at...
```

If `playwright` was not installed for some reason:

```sh
[E] name 'async_playwright' is not defined
```

If you do not remember your password (or for other reasons), you can authorize with a one-time code:

```bash
$ hh-applicant-tool authorize '<your phone or email>'
📨 Код был отправлен. Проверьте почту или SMS.
📩 Введите полученный код: 1387
🔓 Авторизация прошла успешно!
```

If authorization fails despite correct credentials, a captcha is most likely required. You can enter the captcha in the terminal if it supports the kitty protocol (e.g., Kitty, Konsole, Ghostty, and others):

```sh
hh-applicant-tool authorize --use-kitty
```

<img width="843" height="602" alt="Untitled" src="https://github.com/user-attachments/assets/8f5dec0c-c3d4-4c5c-bd8b-3aeffa623d87" />

The sixel protocol is also supported: `--use-sixel/--sixel/-s`.

Manual authorization, launching the built-in browser:

```sh
hh-applicant-tool authorize --manual
```

Checking authorization:

```bash
$ hh-applicant-tool whoami
🆔 27405918 Кузнецов Андрей Владимирович [ 📄 1 | 👁️ +115 | ✉️ +28 ]
```

On success, the tokens are saved to `config.json`. After a successful authorization, the login (email or phone) and the password, if one was supplied, are remembered and will be filled in automatically unless you specify them explicitly.

The access token is issued for two weeks and refreshes automatically. To refresh it manually, run:

```bash
$ hh-applicant-tool refresh-token
```

Keep in mind that the `refresh_token` has a lifetime of its own, so a full re-authorization may eventually be required.

---

## Command reference

Example commands:

```bash
# General form: global options first, then the command and its arguments
$ hh-applicant-tool [options] <operation> [args]
# Help on global flags plus the list of operations
$ hh-applicant-tool -h
# Help on a specific operation
$ hh-applicant-tool authorize -h
# Authorize
$ hh-applicant-tool authorize
# Authorize using a different profile
$ hh-applicant-tool --profile profile123 authorize
# Send applications
$ hh-applicant-tool apply-similar
# To test the search string and other parameters, use --dry-run.
# С ним отклики не отправляются, а лишь выводятся сообщения $ hh-applicant-tool -vv apply-similar --search "Python программист" --per-page 3 --total-pages 1 --dry-run # Поднимаем резюме $ hh-applicant-tool update-resumes # Ответить работодателям $ hh-applicant-tool reply-employers # Просмотр лога в реальном времени $ hh-applicant-tool log -f # Посмотреть содержимое конфига $ hh-applicant-tool config # Редактировать конфиг в стандартном редакторе $ hh-applicant-tool config -e # Вывести значение из конфига $ hh-applicant-tool config -k token.access_token # Установить значение в конфиге, например, socks-прокси $ hh-applicant-tool config -s proxy_url socks5h://localhost:1080 # Удалить значение из конфига $ hh-applicant-tool config -u proxy_url # Утилита все данные об откликах хранит в SQLite $ hh-applicant-tool query 'select count(*) from vacancy_contacts;' +----------+ | count(*) | +----------+ | 42 | +----------+ # Экспорт контактов в csv $ hh-applicant-tool query 'select * from vacancy_contacts' --csv -o contacts.csv # Выполнение запросов в интерактивном режиме $ hh-applicant-tool query # Чистим отказы $ hh-applicant-tool clear-negotiations # При обновлении может сломаться схема БД, для ее починки нужно выполнить # поочерёдно все миграции, добавленные после выхода последней версии $ hh-applicant-tool migrate List of migrations: [1]: 2026-01-07 Choose migration [1] (Keep empty to exit): 1 ✅ Success! 
# Вывести все настройки $ hh-applicant-tool settings +----------+-------------------------+-------------------------+ | Тип | Ключ | Значение | +----------+-------------------------+-------------------------+ | str | user.email | dmitry.kozlov@yandex.ru | +----------+-------------------------+-------------------------+ # Получить значение по ключу $ hh-applicant-tool settings auth.username # Установить email, используемый для автологина $ hh-applicant-tool settings auth.username 'user@example.com' ``` Глобальные настройки: - `-v` используется для вывода отладочной информации. Два таких флага, например, выводят запросы к **API**. - `-c <path>` путь до каталога, где хранятся конфигурации. - `--profile <profile-id>` можно указать профиль, данные которого будут храниться в подкаталоге. | Операция | Описание | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **authorize**, **auth** | Авторизация на hh.ru. Введенные логин и пароль "запоминаются" и будут использованы при следующем вызове команды. | | **whoami**, **id** | Выводит информацию об авторизованном пользователе | | **list-resumes**, **list**, **ls** | Список резюме | | **update-resumes**, **update** | Обновить все резюме. Аналогично нажатию кнопки «Обновить дату». | | **apply-similar** | Откликнуться на все подходящие вакансии СО ВСЕХ РЕЗЮМЕ. Лимит = 200 в день. На HH есть спам-фильтры, так что лучше не рассылайте отклики со ссылками, иначе рискуете попасть в теневой бан. | | **reply-employers**, **reply** | Ответить во все чаты с работодателями, где нет ответа либо не прочитали ваш предыдущий ответ | | **clear-negotiations** | Отмена откликов | | **call-api**, **api** | Вызов произвольного метода API с выводом результата. | | **refresh-token**, **refresh** | Обновляет access_token. 
| | **config** | Показывает содержимое конфига. С флагом -e открывает его для редактирования. | | **settings** | Просмотр и управление настройками в базе данных. Простое key-value хранилище. | | **install** | Устанавливает зависимости, такие как браузер Chromium, необходимые для авторизации. | | **uninstall** | Удаляет браузер Chromium, используемый для авторизации. | | **check-proxy** | Проверяет используемые прокси | | **migrate** | Починить базу | | **query** | Выполнение SQL-запросов к базе. Схема БД находится в файле [schema.sql](./hh_hh_applicant_tool/storage/queries/schema.sql). Если скормить ее [DeepSeek](https://chat.deepseek.com), то он поможет написать любой запрос. | | **log** | Просмотр файла-лога. С флагом -f будет следить за изменениями. В логах частично скрыты идентефикаторы в целях безопасности. | Утилита использует систему плагинов. Все они лежат в [operations](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool/operations). Модули, расположенные там, автоматически добавляются как доступные команды. За основу для своего плагина можно взять [whoami.py](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool/operations/whoami.py). Для тестирования запросов к API используйте команду `call-api` совместно с `jq` для обработки JSON. Примеры поиска работодателей: ```bash $ hh-applicant-tool call-api /employers text="IT" only_with_vacancies=true | jq -r '.items[].alternate_url' https://hh.ru/employer/1966364 https://hh.ru/employer/4679771 ... ``` Синтаксис `call-api` немного похож на `httpie` или `curlie`: ```sh $ hh-applicant-tool call-api [-m {GET|POST|PUT|DELETE}] <endpoint> [<key=value> ...] ``` Если используется метод `GET` или `DELETE` (или ничего не указано), то параметры будут переданы как query string. Во всех остальных случаях парметры передаются как `application/x-www-form-urlencoded` в теле запроса. Данная возможность полезна для написания Bash-скриптов. 
Документация для работы с API соискателей была удалена с ха-ха.сру и его корпоративного репозитория. Можете не искать, они затерли даже историю репозитория. Но я через веб-архив выкачал документацию. Чтобы ее посмотреть, клонируйте этот репозиторий и откройте файл, например, с помощью [Swagger Viewer](https://marketplace.visualstudio.com/items?itemName=Arjun.swagger-viewer). <img width="740" height="768" alt="image" src="https://github.com/user-attachments/assets/597fa31e-8bab-48c8-8601-ab9dfc9075b1" /> Так же существуют cli-утилиты: ```sh npx @redocly/cli preview -d docs/hhapi ``` Потом нужно открыть в браузере [http://localhost:4000](http://localhost:4000). > Отдельные замечания у меня к API HH. Оно пиздец какое кривое. Например, при создании заявки возвращается пустой ответ либо редирект, хотя по логике должен возвращаться созданный объект. Так же в ответах сервера нет `Content-Length`. Из-за этого нельзя узнать, есть ли тело у ответа сервера, нужно его пробовать прочитать. Я так понял, там какой-то прокси оборачивает все запросы и отдает всегда `Transfer-Encoding: Chunked`. А еще он возвращает 502 ошибку, когда бэкенд на Java падает либо долго отвечает (таймаут)? А вот [язык запросов](https://hh.ru/article/1175) мне понравился. Можно что-то вроде этого использовать `NOT (!ID:123 OR !ID:456 OR !ID:789)`, чтобы отсеить какие-то вакансии. По сути, никакие дополнительные команды, кроме имеющихся, не нужны. Вы можете сделать что угодно с помощью `call-api`, но если хочется чего-то особенного, можно добавить свои команды. --- ## Использование AI Для генерации опроводительных писем при откликах и ответа в чаты работодателей (`reply-employers`) можно использовать OpenAI (ChatGPT). Пример рассылки откликов с генерированным письмом: ```sh hh-applicant-tool apply-similar -f --ai ``` Генерацию сопроводительных писем в откликах я счи
text/markdown
Senior YAML Developer
yamldeveloper@proton.me
null
null
null
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
https://github.com/s3rgeym/hh-applicant-tool
null
<4.0,>=3.10
[]
[]
[]
[ "requests[socks]<3.0.0,>=2.32.3", "prettytable<4.0.0,>=3.6.0", "playwright<2.0.0,>=1.57.0; extra == \"playwright\"", "pillow<13.0.0,>=12.1.0; extra == \"pillow\"" ]
[]
[]
[]
[ "Homepage, https://github.com/s3rgeym/hh-applicant-tool", "Repository, https://github.com/s3rgeym/hh-applicant-tool" ]
poetry/2.2.1 CPython/3.11.0 Linux/6.11.0-1018-azure
2026-01-16T04:29:57.488995
hh_applicant_tool-1.5.3-py3-none-any.whl
86,161
42/4f/b865e6e5ac40805c07aa8c993546e673ae3a16e06b5e89fdb4dc99a61ebe/hh_applicant_tool-1.5.3-py3-none-any.whl
py3
bdist_wheel
null
false
aa48696d32dffc7255871cc4d875b69e
061997236dc599cffa40680a2b627a434cc623bf39552d69022e5cb782e96ce1
424fb865e6e5ac40805c07aa8c993546e673ae3a16e06b5e89fdb4dc99a61ebe
null
[]
2.4
hh-applicant-tool
1.5.3
HH-Applicant-Tool: An automation utility for HeadHunter (hh.ru) designed to streamline the job search process by auto-applying to relevant vacancies and periodically refreshing resumes to stay at the top of recruiter searches.
# HH Applicant Tool

> Looking for hourly or project work: [@feedback_s3rgeym_bot](https://t.me/feedback_s3rgeym_bot) (Python, Vue.js, DevOps)

![Publish to PyPI](https://github.com/s3rgeym/hh-applicant-tool/actions/workflows/publish.yml/badge.svg) [![PyPi Version](https://img.shields.io/pypi/v/hh-applicant-tool)]() [![Python Versions](https://img.shields.io/pypi/pyversions/hh-applicant-tool.svg)]() [![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/s3rgeym/hh-applicant-tool)]() [![PyPI - Downloads](https://img.shields.io/pypi/dm/hh-applicant-tool)]() [![Total Downloads](https://static.pepy.tech/badge/hh-applicant-tool)]()

<div align="center"> <img src="https://github.com/user-attachments/assets/29d91490-2c83-4e3f-a573-c7a6182a4044" width="500"> </div>

### ☕ Support the Project

[![Donate BTC](https://img.shields.io/badge/Donate-BTC-orange?style=for-the-badge&logo=bitcoin&logoColor=white)](bitcoin:BC1QWQXZX6D5Q0J5QVGH2VYXTFXX9Y6EPPGCW3REHS?label=%D0%94%D0%BB%D1%8F%20%D0%BF%D0%BE%D0%B6%D0%B5%D1%80%D1%82%D0%B2%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B9)

**BTC Address:** `BC1QWQXZX6D5Q0J5QVGH2VYXTFXX9Y6EPPGCW3REHS`

---

## ✨ Key Advantages

- 💸 **Completely free.** Web and Telegram services with similar functionality charge 5,000–12,000 rubles a month.
- 🔒 **Your personal data stays with you.** Unlike third-party services, your email, phone number, password, and other personal data are never sent anywhere, which you can verify by reading the [open source code](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool). Owners of third-party services will never show you their sources; they know everything about you, and that data can end up sold off or leaked in a breach.
- 💾 **Contacts and history are saved.** Employer contacts, along with information about employers and their vacancies, are stored in a local database, so with minimal SQL experience you can search them far faster than on the site.
- 🛡️ **Protection against bans.** The tool sends requests from your own device, imitating an ordinary user. Third-party services send requests for hundreds of accounts from a single server, which makes a ban on your account all but certain.
- 😎 **Easy to use.** Any novice computer user can handle it — so much so that recruiters already complain in their chats about mass applications from 14–17-year-olds who have mastered the tool.
- 👯 **Multiple accounts and resumes.** Thanks to profiles, the tool can manage an unlimited number of accounts and resumes.
- 🖥️ **A true CLI that runs on servers.** The tool has a pure console interface. Although a browser is used to get past the protection during authorization, it runs headless by default, so `hh-applicant-tool` needs no GPU or graphical environment (X server) and can authorize even from a server or a Docker container.
- 🚀 **Scripting.** You can drive the tool from your own Python scripts.
- 🤖 **Fighting ATS and HR filters.** Many companies have deployed AI-powered ATS systems that reject an application within five seconds; a rejection can come simply because a single keyword is missing from your resume — not to mention filters that screen candidates by zodiac sign (apparently Capricorns are out of favor). This devalues the effort spent writing cover letters and reading endless AI-generated job postings. If recruiters no longer bother to read resumes (or to write postings themselves), there is little reason for you to read their output either.
The tool frees you from this routine, which otherwise turns a job search into a full-time job. Rejection rates currently run at 98–99%, counting employers who simply never respond, and the only realistic way to raise your odds of even reaching an interview is to apply automatically to every suitable vacancy. Most people have dual-SIM phones, which means anyone can send up to 400 applications a day — more still with accounts registered for relatives.

---

## Contents

- [HH Applicant Tool](#hh-applicant-tool)
- [☕ Support the Project](#-support-the-project)
- [✨ Key Advantages](#-key-advantages)
- [Contents](#contents)
- [Description](#description)
- [Background](#background)
- [Running via Docker](#running-via-docker)
- [Standard Installation](#standard-installation)
- [Installing the Tool](#installing-the-tool)
- [Additional Dependencies](#additional-dependencies)
- [Authorization](#authorization)
- [Command Reference](#command-reference)
- [Using AI](#using-ai)
- [OpenAI/ChatGPT](#openaichatgpt)
- [Message Templates](#message-templates)
- [Application Data](#application-data)
- [Configuration File](#configuration-file)
- [Logs](#logs)
- [Database](#database)
- [Using in Scripts](#using-in-scripts)
- [Additional Settings](#additional-settings)
- [License (Limited Non-Commercial License)](#license-limited-non-commercial-license)

---

## Description

> This tool ignores the "ban" on third-party API access to HH, because it presents itself as the official Android application

> The cover-letter generator can use AI, including ChatGPT; details below

A tool for seasoned job hunters that automates routine actions on HH.RU, such as applying to all suitable vacancies and bumping all your resumes (a free alternative to a paid HH service). The tool stores information about your applications locally, including any contacts received.
This is convenient: the contact is preserved even if a rejection arrives later. My advice: hide your phone number from employers if you mass-apply through the tool — the number of scammers on the site is, to put it mildly, off the charts.

The tool has a Telegram channel: [HH Applicant Tool](https://t.me/hh_applicant_tool). The old channel, <s>[HH Resume Automate](https://t.me/hh_resume_automate)</s>, was taken down by moderators who somehow saw a copyright violation in a Japanese flag with two letters "h"...

Works with Python >= 3.10. A suitable Python version can be installed via asdf, pyenv, conda, and the like. Manjaro and even the latest Ubuntu releases ship a new enough Python.

The tool is cross-platform: it reliably works on Linux, Mac, and Windows, including WSL. If you have a rooted phone, you can also extract the `access` and `refresh` tokens from the official application and add them to the config.

Example run:

![image](https://github.com/user-attachments/assets/a0cce1aa-884b-4d84-905a-3bb207eba4a3)

> If you set filters in the web interface, the script applies them when responding to suitable vacancies

> The tool automatically picks up proxies from environment variables such as http_proxy or HTTPS_PROXY

---

## Background

For a long time I submitted mass applications from the browser console:

```js
$$('[data-qa="vacancy-serp__vacancy_response"]').forEach((el) => el.click());
```

It works, though not perfectly. I even tried automating applications with `p[yu]ppeteer`, until I read the [documentation](https://github.com/hhru/api) and discovered that the **API** already contained every method I needed. HeadHunter lets you register your own application, but registration is moderated manually, and it is unlikely anyone would approve an application built for mass-applying. So I [decompiled](https://gist.github.com/s3rgeym/eee96bbf91b04f7eb46b7449f8884a00) the official **Android** app and extracted the **CLIENT_ID** and **CLIENT_SECRET** needed to work with the **API**.
---

## Running via Docker

This is the method recommended by the developer; use it also when the standard installation doesn't work. It is the simplest way to run the tool — five copy-pasted commands — and well suited to owners of dedicated servers used for VPNs. The one drawback of `docker` is disk usage: the Chromium used during authorization pulls in half of Ubuntu (over a gigabyte).

First install `docker` and `docker-compose`:

```sh
sudo apt install docker.io docker-compose-v2
```

Clone the repository and enter the directory:

```sh
git clone https://github.com/s3rgeym/hh-applicant-tool
cd hh-applicant-tool
```

> docker-compose commands must be run from inside this directory!

Now authorize:

```sh
docker-compose run -u docker -it hh_applicant_tool \
  hh-applicant-tool -vv auth -k
```

Example output:

```
👤 Enter your email or phone: your-mail@gmail.com
📨 A code has been sent. Check your email or SMS.
📩 Enter the received code: 1234
🔓 Authorization successful!
```

The captcha is displayed only in terminals supporting the **kitty** protocol, such as **Kitty** or **Konsole**.

Authorization with an explicit login and password looks like this:

```sh
docker-compose run -u docker -it hh_applicant_tool \
  hh-applicant-tool -vv auth -k '<login>' -p '<password>'
```

Authorization is covered in detail [here](#authorization).

Once authorized, you can start the cron-driven application loop:

```sh
docker-compose up -d
```

What does it do?

- Sends applications from all published resumes.
- Bumps your resumes.

Viewing the `cron` logs:

```sh
docker compose logs -f
```

The output should look something like:

```sh
hh_applicant_tool  | [Wed Jan 14 08:33:53 MSK 2026] Running startup tasks...
hh_applicant_tool  | ℹ️ Token has not expired; no refresh needed.
hh_applicant_tool  | ✅ Updated: Программист
```

Press `Ctrl-C` to stop following the logs.
Error details can be found in `config/log.txt`, and employer contacts in `config/data` (readable with `sqlite3`). `config/config.json` stores the tokens that grant access to your account.

The Docker services start automatically after a reboot. To stop them, run:

```sh
docker-compose down
```

To update the tool, it is usually enough to run, inside the directory:

```sh
git pull
```

In rare cases a full rebuild is needed:

```sh
docker compose up -d --build
```

To send applications from several accounts, edit `docker-compose.yml`:

```yaml
services:
  # Leave this service as-is
  hh_applicant_tool:
    # ...

  # Add new services below: copy-paste the block, changing the service name,
  # container_name, and the HH_PROFILE_ID value
  hh_second:
    extends: hh_applicant_tool
    container_name: hh_second
    environment:
      - HH_PROFILE_ID=second

  hh_third:
    extends: hh_applicant_tool
    container_name: hh_third
    environment:
      - HH_PROFILE_ID=third

  hh_fourth:
    extends: hh_applicant_tool
    container_name: hh_fourth
    environment:
      - HH_PROFILE_ID=fourth
```

Here `HH_PROFILE_ID` is a profile identifier of your choosing. Next, authorize each profile:

```sh
# Authorize the second profile
docker-compose exec -u docker -it hh_applicant_tool \
  hh-applicant-tool --profile-id second auth -k

# Authorize the third profile
docker-compose exec -u docker -it hh_applicant_tool \
  hh-applicant-tool --profile-id third auth -k

# And so on
```

Then run `docker-compose up -d` to start the new services.

[Commands](#command-reference) can be tried out inside a running container:

```sh
$ docker-compose exec -u docker -it hh_applicant_tool bash
docker@1897bdd7c80b:/app$ hh-applicant-tool config -p /app/config/config.json
docker@1897bdd7c80b:/app$ hh-applicant-tool refresh-token
ℹ Token has not expired; no refresh needed.
docker@1897bdd7c80b:/app$
```

> Note that `docker-compose exec`/`docker-compose run` are invoked with `-u docker`.
Chromium, which authorization requires, is installed only for the `docker` user; running as that user also avoids permission problems where created files would need root privileges to modify.

If you want to call `apply-similar` with particular arguments, create `apply-similar.sh` in the project root:

```sh
#!/bin/bash
/usr/local/bin/python -m hh_applicant_tool apply-similar # add your arguments here
```

Then, in `startup.sh` and `crontab`, replace `/usr/local/bin/python -m hh_applicant_tool apply-similar` with `/bin/sh /app/apply-similar.sh`.

---

## Standard Installation

### Installing the Tool

The universal method uses pipx (requires the `python-pipx` package on Arch):

```bash
# The full version with authorization support includes Node.js and various utilities.
# The plain package without [playwright] can be used on a server if you copy the
# config over, and it is almost 500 MB smaller.
$ pipx install 'hh-applicant-tool[playwright]'

# To get the very latest version, install from git
$ pipx install "git+https://github.com/s3rgeym/hh-applicant-tool[playwright]"

# To upgrade to a new version
$ pipx upgrade hh-applicant-tool
```

pipx places the `hh-applicant-tool` executable in `~/.local/bin`, making the command available. `~/.local/bin` must be in `$PATH` (most distributions add it by default).

The traditional method for Linux/Mac:

```sh
mkdir -p ~/.venvs
python -m venv ~/.venvs/hh-applicant-tool
# Repeat this activation step in every new shell to make hh-applicant-tool available
. ~/.venvs/hh-applicant-tool/bin/activate
pip install 'hh-applicant-tool[playwright]'
```

Installation on **Windows**, step by step:

- First, install the latest **Python 3** by any convenient means.
- Run **Terminal** or **PowerShell** as Administrator and execute:

```ps
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
```

This policy allows the current user to run scripts; without it, virtual environments will not work. Next you can install `pipx` and return to the instructions at the top of this section:

- Still as Administrator, run:

```ps
python -m pip install --user pipx
```

And then:

```ps
python -m pipx ensurepath
```

- Restart Terminal/PowerShell and check:

```ps
pipx -h
```

Using a virtual environment instead:

- Create and activate a virtual environment:

```ps
PS> python -m venv hh-applicant-venv
PS> .\hh-applicant-venv\Scripts\activate
```

- Install the packages into the `hh-applicant-venv` environment:

```ps
(hh-applicant-venv) PS> pip install 'hh-applicant-tool[playwright]'
```

- Check that it works:

```ps
(hh-applicant-venv) PS> hh-applicant-tool -h
```

- If it fails, go back to the first step.
- For subsequent runs, activate the virtual environment first.

### Additional Dependencies

After the above, install extra dependencies such as Chromium:

```sh
$ hh-applicant-tool install
```

This step is optional: it is needed only for authorization.

---

## Authorization

Direct authorization:

```bash
$ hh-applicant-tool authorize '<your phone or email>' -p '<password>'
```

If you skipped the dependency installation step, you will see this error:

```sh
[E] BrowserType.launch: Executable doesn't exist at...
```

If `playwright` was not installed for some reason:

```sh
[E] name 'async_playwright' is not defined
```

If you don't remember your password (among other reasons), you can authorize with a one-time code:

```bash
$ hh-applicant-tool authorize '<your phone or email>'
📨 A code has been sent. Check your email or SMS.
📩 Enter the received code: 1387
🔓 Authorization successful!
```

If authorization fails despite correct credentials, a captcha is most likely required. It can be entered right in the terminal if the terminal supports the kitty protocol (e.g. Kitty, Konsole, Ghostty):

```sh
hh-applicant-tool authorize --use-kitty
```

<img width="843" height="602" alt="Untitled" src="https://github.com/user-attachments/assets/8f5dec0c-c3d4-4c5c-bd8b-3aeffa623d87" />

The sixel protocol is also supported: `--use-sixel/--sixel/-s`.

Manual authorization with the built-in browser:

```sh
hh-applicant-tool authorize --manual
```

Checking authorization:

```bash
$ hh-applicant-tool whoami
🆔 27405918 Кузнецов Андрей Владимирович [ 📄 1 | 👁️ +115 | ✉️ +28 ]
```

On success, the tokens are saved to `config.json`. The login (email or phone) and, if supplied, the password are remembered and substituted automatically unless specified explicitly.

The access token is issued for two weeks and refreshed automatically. To refresh it manually, run:

```bash
$ hh-applicant-tool refresh-token
```

Keep in mind that the `refresh_token` has a lifetime too, so a full re-authorization may eventually be required.

---

## Command Reference

Example commands:

```bash
# General form: global options first, then the operation and its arguments
$ hh-applicant-tool [options] <operation> [args]

# Help on global flags and the list of operations
$ hh-applicant-tool -h

# Help on an operation
$ hh-applicant-tool authorize -h

# Authorize
$ hh-applicant-tool authorize

# Authorize using a different profile
$ hh-applicant-tool --profile profile123 authorize

# Send applications
$ hh-applicant-tool apply-similar

# To test the search string and other parameters, use --dry-run.
# With it, no applications are sent; messages are merely printed
$ hh-applicant-tool -vv apply-similar --search "Python программист" --per-page 3 --total-pages 1 --dry-run

# Bump resumes
$ hh-applicant-tool update-resumes

# Reply to employers
$ hh-applicant-tool reply-employers

# Follow the log in real time
$ hh-applicant-tool log -f

# Show the config contents
$ hh-applicant-tool config

# Edit the config in the default editor
$ hh-applicant-tool config -e

# Print a value from the config
$ hh-applicant-tool config -k token.access_token

# Set a config value, e.g. a SOCKS proxy
$ hh-applicant-tool config -s proxy_url socks5h://localhost:1080

# Remove a value from the config
$ hh-applicant-tool config -u proxy_url

# The tool stores all application data in SQLite
$ hh-applicant-tool query 'select count(*) from vacancy_contacts;'
+----------+
| count(*) |
+----------+
| 42       |
+----------+

# Export contacts to CSV
$ hh-applicant-tool query 'select * from vacancy_contacts' --csv -o contacts.csv

# Run queries interactively
$ hh-applicant-tool query

# Clear rejections
$ hh-applicant-tool clear-negotiations

# An update may break the database schema; to repair it, apply one by one
# all migrations added since the previously installed version
$ hh-applicant-tool migrate
List of migrations:
  [1]: 2026-01-07
Choose migration [1] (Keep empty to exit): 1
✅ Success!
# Print all settings
$ hh-applicant-tool settings
+----------+-------------------------+-------------------------+
| Type     | Key                     | Value                   |
+----------+-------------------------+-------------------------+
| str      | user.email              | dmitry.kozlov@yandex.ru |
+----------+-------------------------+-------------------------+

# Get a value by key
$ hh-applicant-tool settings auth.username

# Set the email used for auto-login
$ hh-applicant-tool settings auth.username 'user@example.com'
```

Global options:

- `-v` enables debug output; two such flags also print **API** requests.
- `-c <path>` sets the directory where configurations are stored.
- `--profile <profile-id>` selects a profile whose data is stored in a subdirectory.

| Operation | Description |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **authorize**, **auth** | Authorize on hh.ru. The entered login and password are "remembered" and reused on the next invocation. |
| **whoami**, **id** | Print information about the authorized user |
| **list-resumes**, **list**, **ls** | List resumes |
| **update-resumes**, **update** | Bump all resumes; equivalent to pressing the "Update date" button. |
| **apply-similar** | Apply to all suitable vacancies FROM ALL RESUMES. The limit is 200 per day. HH has spam filters, so avoid links in your applications or you risk a shadow ban. |
| **reply-employers**, **reply** | Reply in every employer chat with no answer, or where your previous reply was not read |
| **clear-negotiations** | Cancel applications |
| **call-api**, **api** | Call an arbitrary API method and print the result. |
| **refresh-token**, **refresh** | Refresh the access_token.
|
| **config** | Show the config contents; the -e flag opens it for editing. |
| **settings** | View and manage settings in the database — a simple key-value store. |
| **install** | Install dependencies, such as the Chromium browser required for authorization. |
| **uninstall** | Remove the Chromium browser used for authorization. |
| **check-proxy** | Check the configured proxies |
| **migrate** | Repair the database |
| **query** | Run SQL queries against the database. The schema lives in [schema.sql](./src/hh_applicant_tool/storage/queries/schema.sql); feed it to [DeepSeek](https://chat.deepseek.com) and it will help you write any query. |
| **log** | View the log file; the -f flag follows changes. Identifiers are partially masked in the logs for security. |

The tool is built around a plugin system: commands live in [operations](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool/operations), and modules placed there are registered automatically as available commands. [whoami.py](https://github.com/s3rgeym/hh-applicant-tool/tree/main/src/hh_applicant_tool/operations/whoami.py) is a good starting point for your own plugin.

To experiment with API requests, use `call-api` together with `jq` for JSON processing. Searching for employers:

```bash
$ hh-applicant-tool call-api /employers text="IT" only_with_vacancies=true | jq -r '.items[].alternate_url'
https://hh.ru/employer/1966364
https://hh.ru/employer/4679771
...
```

The `call-api` syntax loosely resembles `httpie` or `curlie`:

```sh
$ hh-applicant-tool call-api [-m {GET|POST|PUT|DELETE}] <endpoint> [<key=value> ...]
```

With `GET` or `DELETE` (or when no method is specified), the parameters are passed as a query string; in all other cases they are sent as `application/x-www-form-urlencoded` in the request body. This makes the command handy for writing Bash scripts.
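Since global options always precede the operation on the command line, invocations are just as easy to assemble from a Python script as from Bash. A minimal sketch — the `build_cmd` and `run` helpers below are illustrative, not part of the tool:

```python
import subprocess  # used by run(); the demo at the bottom only builds the argv


def build_cmd(operation, *args, profile=None):
    """Assemble an hh-applicant-tool invocation: global flags first, then the operation."""
    cmd = ["hh-applicant-tool"]
    if profile:
        cmd += ["--profile", profile]  # per-profile data is kept in a subdirectory
    cmd.append(operation)
    cmd.extend(args)
    return cmd


def run(operation, *args, profile=None):
    """Run the CLI and return its stdout; raises CalledProcessError on failure."""
    result = subprocess.run(
        build_cmd(operation, *args, profile=profile),
        capture_output=True, text=True, check=True,
    )
    return result.stdout


# e.g. run("apply-similar", "--dry-run", profile="second") would execute:
print(" ".join(build_cmd("apply-similar", "--dry-run", profile="second")))
# hh-applicant-tool --profile second apply-similar --dry-run
```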
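The `query` operation is a thin wrapper around the tool's SQLite database, so the same data can also be read directly with Python's `sqlite3` module. A sketch against an in-memory stand-in: the real `vacancy_contacts` table is defined in schema.sql, and the column names used here are illustrative only:

```python
import sqlite3


def count_contacts(conn):
    """The same count the README's `query` example produces."""
    (count,) = conn.execute("SELECT COUNT(*) FROM vacancy_contacts").fetchone()
    return count


# Demo against an in-memory database with a minimal stand-in schema;
# the real table (see schema.sql) has more columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vacancy_contacts (vacancy_id INTEGER, email TEXT, phone TEXT)")
conn.executemany(
    "INSERT INTO vacancy_contacts VALUES (?, ?, ?)",
    [(1, "hr@example.com", None), (2, None, "+7-900-000-00-00")],
)
print(count_contacts(conn))  # 2
```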
The applicant API documentation was deleted from hh.ru and its corporate repository — don't bother searching; even the repository history was wiped. I recovered the documentation through the Web Archive. To view it, clone this repository and open the file, for example, with [Swagger Viewer](https://marketplace.visualstudio.com/items?itemName=Arjun.swagger-viewer).

<img width="740" height="768" alt="image" src="https://github.com/user-attachments/assets/597fa31e-8bab-48c8-8601-ab9dfc9075b1" />

There are also CLI utilities for this:

```sh
npx @redocly/cli preview -d docs/hhapi
```

Then open [http://localhost:4000](http://localhost:4000) in a browser.

> I have separate complaints about the HH API — it is spectacularly broken. For example, creating an application returns an empty response or a redirect, although logically it should return the created object. Server responses also lack `Content-Length`, so there is no way to know whether a response has a body other than trying to read it; apparently some proxy wraps every request and always responds with `Transfer-Encoding: chunked`. It also returns a 502 error whenever the Java backend crashes or times out. The [query language](https://hh.ru/article/1175), however, I liked: something like `NOT (!ID:123 OR !ID:456 OR !ID:789)` lets you filter out particular vacancies.

Essentially, no commands beyond the existing ones are needed — you can do anything with `call-api` — but if you want something special, you can add your own.

---

## Using AI

OpenAI (ChatGPT) can be used to generate cover letters for applications and replies in employer chats (`reply-employers`).

Sending applications with a generated letter:

```sh
hh-applicant-tool apply-similar -f --ai
```

Генерацию сопроводительных писем в откликах я счи
text/markdown
Senior YAML Developer
yamldeveloper@proton.me
null
null
null
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
https://github.com/s3rgeym/hh-applicant-tool
null
<4.0,>=3.10
[]
[]
[]
[ "requests[socks]<3.0.0,>=2.32.3", "prettytable<4.0.0,>=3.6.0", "playwright<2.0.0,>=1.57.0; extra == \"playwright\"", "pillow<13.0.0,>=12.1.0; extra == \"pillow\"" ]
[]
[]
[]
[ "Homepage, https://github.com/s3rgeym/hh-applicant-tool", "Repository, https://github.com/s3rgeym/hh-applicant-tool" ]
poetry/2.2.1 CPython/3.11.0 Linux/6.11.0-1018-azure
2026-01-16T04:29:59.323123
hh_applicant_tool-1.5.3.tar.gz
75,232
5d/64/2eca3c70f869a466a73ff2d09885b56ca0269ae8412331c690d5f55097bf/hh_applicant_tool-1.5.3.tar.gz
source
sdist
null
false
d59d4bdbcbe5c878b7f341ff7b02e02e
618bd8b22b0be69613a32d000803508a4c68a9249554de3e591aa28067efbcfb
5d642eca3c70f869a466a73ff2d09885b56ca0269ae8412331c690d5f55097bf
null
[]
2.4
kinemotion
0.76.2
Video-based kinematic analysis for athletic performance
# Kinemotion [![PyPI version](https://img.shields.io/pypi/v/kinemotion.svg)](https://pypi.org/project/kinemotion/) [![Python Version](https://img.shields.io/pypi/pyversions/kinemotion.svg)](https://pypi.org/project/kinemotion/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Tests](https://github.com/feniix/kinemotion/workflows/Test%20%26%20Quality/badge.svg)](https://github.com/feniix/kinemotion/actions) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=feniix_kinemotion&metric=alert_status)](https://sonarcloud.io/summary/overall?id=feniix_kinemotion) [![Coverage](https://sonarcloud.io/api/project_badges/measure?project=feniix_kinemotion&metric=coverage)](https://sonarcloud.io/summary/overall?id=feniix_kinemotion) [![OpenSSF Best Practices](https://www.bestpractices.dev/projects/11506/badge)](https://www.bestpractices.dev/projects/11506) [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) [![Type checked with pyright](https://img.shields.io/badge/type%20checked-pyright-blue.svg)](https://github.com/microsoft/pyright) [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit) > A video-based kinematic analysis tool for athletic performance. Analyzes vertical jump videos to estimate key performance metrics using MediaPipe pose tracking and advanced kinematics. 
**Supported jump types:** - **Drop Jump**: Ground contact time, flight time, reactive strength index - **Counter Movement Jump (CMJ)**: Jump height, flight time, countermovement depth, triple extension biomechanics - **Squat Jump (SJ)**: Pure concentric power, force production, requires athlete mass ## Features ### Core Features - **Automatic pose tracking** using MediaPipe Pose landmarks - **Derivative-based velocity** - smooth velocity calculation from position trajectory - **Trajectory curvature analysis** - acceleration patterns for refined event detection - **Sub-frame interpolation** - precise timing beyond frame boundaries - **Intelligent auto-tuning** - automatic parameter optimization based on video characteristics - **JSON output** for easy integration with other tools - **Debug video overlays** with visual analysis - **Batch processing** - CLI and Python API for parallel processing - **Python library API** - use kinemotion programmatically - **CSV export** - aggregated results for research ### Drop Jump Analysis - **Ground contact detection** based on foot velocity and position - **Automatic drop jump detection** - identifies box → drop → landing → jump phases - **Metrics**: Ground contact time, flight time, jump height (calculated from flight time) - **Reactive strength index** calculations ### Counter Movement Jump (CMJ) Analysis - **Backward search algorithm** - robust phase detection from peak height - **Flight time method** - force plate standard (h = g×t²/8) - **Triple extension tracking** - ankle, knee, hip joint angles - **Skeleton overlay** - biomechanical visualization - **Metrics**: Jump height, flight time, countermovement depth, eccentric/concentric durations - **Validated accuracy**: 50.6cm jump (±1 frame precision) ### Squat Jump (SJ) Analysis - **Static squat start** - pure concentric power test (no countermovement) - **Power/Force calculations** - Sayers regression (R² = 0.87, \<1% error vs force plates) - **Mass required** - athlete 
body weight needed for kinetic calculations - **Metrics**: Jump height, flight time, squat hold/concentric durations, peak/mean power, peak force - **Phase detection**: Squat hold → concentric → flight → landing ## ⚠️ Validation Status **Current Status:** Pre-validation (not validated against force plates or motion capture systems) ### What This Tool IS Suitable For ✅ **Training monitoring** - Track relative changes within the same athlete over time ✅ **Educational purposes** - Learn about jump biomechanics and video analysis ✅ **Exploratory analysis** - Initial investigation before formal testing ✅ **Proof-of-concept research** - Demonstrate feasibility of video-based methods ### What This Tool IS NOT Suitable For ❌ **Research publications** - As a validated measurement instrument ❌ **Clinical decision-making** - Injury assessment, return-to-play decisions ❌ **Talent identification** - Absolute performance comparisons between athletes ❌ **Legal/insurance assessments** - Any context requiring validated measurements ❌ **High-stakes testing** - Draft combines, professional athlete evaluation ### Known Limitations - **No force plate validation** - Accuracy claims are theoretical, not empirical - **MediaPipe constraints** - Accuracy affected by lighting, clothing, occlusion, camera quality - **Lower sampling rate** - Typical video (30-60fps) vs validated apps (120-240Hz) - **Indirect measurement** - Landmarks → CoM estimation introduces potential error - **No correction factors** - Unlike validated tools (e.g., MyJump), no systematic bias corrections applied ### Recommended Use If you need validated measurements for research or clinical use, consider: - **Commercial validated apps**: MyJump 2, MyJumpLab (smartphone-based, force plate validated) - **Laboratory equipment**: Force plates, optical motion capture systems - **Validation testing**: Compare kinemotion against validated equipment in your specific use case For detailed validation status and roadmap, see 
[`docs/validation-status.md`](docs/validation-status.md). ## Setup ### System Requirements **All Platforms:** - Python 3.10, 3.11, or 3.12 **Platform-Specific:** #### Windows **Required system dependencies:** - [Microsoft Visual C++ 2022 Redistributable](https://visualstudio.microsoft.com/visual-cpp-build-tools/) - Runtime libraries for OpenCV/MediaPipe - Python 3.10-3.12 (64-bit) - MediaPipe requires 64-bit Python **Recommended for mobile video support:** - [FFmpeg](https://ffmpeg.org/download.html) - Download and add to PATH for full video codec support #### macOS **Required system dependencies:** - Xcode Command Line Tools - Provides compilers and system frameworks ```bash xcode-select --install ``` **Recommended for mobile video support:** ```bash brew install ffmpeg ``` #### Linux (Ubuntu/Debian) **Recommended system libraries:** ```bash sudo apt-get update && sudo apt-get install -y libgl1 libglib2.0-0 libgomp1 ffmpeg ``` Here `libgl1` provides OpenGL for OpenCV, `libglib2.0-0` provides GLib for MediaPipe, `libgomp1` provides OpenMP for multi-threading, and `ffmpeg` adds video codec support and metadata extraction. **Note:** `ffmpeg` provides the `ffprobe` tool for video metadata extraction (rotation, aspect ratio). Kinemotion works without it, but mobile/rotated videos may not process correctly. A warning will be shown if `ffprobe` is not available. ### Installation Methods #### From PyPI (Recommended) ```bash pip install kinemotion ``` #### From Source (Development) **Step 1:** Install asdf plugins (if not already installed): ```bash asdf plugin add python asdf plugin add uv ``` **Step 2:** Install versions specified in `.tool-versions`: ```bash asdf install ``` **Step 3:** Install project dependencies using uv: ```bash uv sync ``` This will install all dependencies and make the `kinemotion` command available. ## Usage Kinemotion supports two jump types with intelligent auto-tuning that automatically optimizes parameters based on video characteristics. 
### Analyzing Drop Jumps Analyzes reactive strength and ground contact time: ```bash # Automatic parameter tuning based on video characteristics kinemotion dropjump-analyze video.mp4 ``` ### Analyzing CMJ Analyzes jump height and biomechanics: ```bash # No drop height needed (floor level) kinemotion cmj-analyze video.mp4 # With triple extension visualization kinemotion cmj-analyze video.mp4 --output debug.mp4 ``` ### Analyzing Squat Jump (SJ) Analyzes pure concentric power production: ```bash # Mass is required for power/force calculations kinemotion sj-analyze video.mp4 --mass 75.0 # Complete analysis with all outputs kinemotion sj-analyze video.mp4 --mass 75.0 \ --output debug.mp4 \ --json-output results.json \ --verbose ``` ### Common Options (All Jump Types) ```bash # Save metrics to JSON kinemotion cmj-analyze video.mp4 --json-output results.json # Generate debug video kinemotion cmj-analyze video.mp4 --output debug.mp4 # Complete analysis with all outputs kinemotion cmj-analyze video.mp4 \ --output debug.mp4 \ --json-output results.json \ --verbose ``` ### Quality Presets ```bash # Fast (50% faster, good for batch) kinemotion cmj-analyze video.mp4 --quality fast # Balanced (default) kinemotion cmj-analyze video.mp4 --quality balanced # Accurate (research-grade) kinemotion cmj-analyze video.mp4 --quality accurate --verbose ``` ### Batch Processing Process multiple videos in parallel: ```bash # Drop jumps kinemotion dropjump-analyze videos/*.mp4 --batch --workers 4 # CMJ with output directories kinemotion cmj-analyze videos/*.mp4 --batch --workers 4 \ --json-output-dir results/ \ --csv-summary summary.csv ``` ### Quality Assessment All analysis outputs include automatic quality assessment in the metadata section to help you know when to trust results: ```json { "data": { "jump_height_m": 0.352, "flight_time_ms": 534.2 }, "metadata": { "quality": { "confidence": "high", "quality_score": 87.3, "quality_indicators": { "avg_visibility": 0.89, "min_visibility": 
0.82, "tracking_stable": true, "phase_detection_clear": true, "outliers_detected": 2, "outlier_percentage": 1.5, "position_variance": 0.0008, "fps": 60.0 }, "warnings": [] } } } ``` **Confidence Levels:** - **High** (score ≥75): Trust these results, good tracking quality - **Medium** (score 50-74): Use with caution, check quality indicators - **Low** (score \<50): Results may be unreliable, review warnings **Common Warnings:** - Poor lighting or occlusion detected - Unstable landmark tracking (jitter) - High outlier rate (tracking glitches) - Low frame rate (\<30fps) - Unclear phase transitions **Filtering by Quality:** ```python # Only use high-confidence results metrics = process_cmj_video("video.mp4") if metrics.quality_assessment is not None and metrics.quality_assessment.confidence == "high": print(f"Reliable jump height: {metrics.jump_height:.3f}m") elif metrics.quality_assessment is not None: print(f"Low quality - warnings: {metrics.quality_assessment.warnings}") ``` ## Python API Use kinemotion as a library for automated pipelines and custom analysis. 
### Drop Jump API ```python from kinemotion import process_dropjump_video # Process a single video metrics = process_dropjump_video( video_path="athlete_jump.mp4", quality="balanced", verbose=True ) # Access results print(f"Jump height: {metrics.jump_height:.3f} m") print(f"Ground contact time: {metrics.ground_contact_time * 1000:.1f} ms") print(f"Flight time: {metrics.flight_time * 1000:.1f} ms") ``` ### Bulk Video Processing ```python # Drop jump bulk processing from kinemotion import DropJumpVideoConfig, process_dropjump_videos_bulk configs = [ DropJumpVideoConfig("video1.mp4", quality="balanced"), DropJumpVideoConfig("video2.mp4", quality="accurate"), ] results = process_dropjump_videos_bulk(configs, max_workers=4) # CMJ bulk processing from kinemotion import CMJVideoConfig, process_cmj_videos_bulk cmj_configs = [ CMJVideoConfig("cmj1.mp4"), CMJVideoConfig("cmj2.mp4", quality="accurate"), ] cmj_results = process_cmj_videos_bulk(cmj_configs, max_workers=4) for result in cmj_results: if result.success: print(f"{result.video_path}: {result.metrics.jump_height*100:.1f}cm") ``` See `examples/bulk/README.md` for comprehensive API documentation. 
### CMJ-Specific Features ```python # Triple extension angles available in metrics metrics = process_cmj_video("video.mp4", output_video="debug.mp4") # Debug video shows: # - Skeleton overlay (foot→shin→femur→trunk) # - Joint angles (ankle, knee, hip, trunk) # - Phase-coded visualization ``` ### Squat Jump (SJ) API ```python from kinemotion import process_sj_video # Mass is required for power/force calculations metrics = process_sj_video( video_path="athlete_sj.mp4", mass_kg=75.0, # Required: athlete body mass quality="balanced", verbose=True ) # Access results print(f"Jump height: {metrics.jump_height:.3f}m") print(f"Squat hold: {metrics.squat_hold_duration*1000:.1f}ms") print(f"Concentric: {metrics.concentric_duration*1000:.1f}ms") # Power/force (only available if mass provided) if metrics.peak_power: print(f"Peak power: {metrics.peak_power:.0f}W") print(f"Mean power: {metrics.mean_power:.0f}W") print(f"Peak force: {metrics.peak_force:.0f}N") ``` ### CSV Export Example ```python # See examples/bulk/ for complete CSV export examples from kinemotion import process_cmj_video import csv from pathlib import Path # ... process videos ... 
with open("results.csv", "w", newline="") as f: writer = csv.writer(f) writer.writerow(["Video", "GCT (ms)", "Flight (ms)", "Jump (m)"]) for r in results: if r.success and r.metrics: writer.writerow([ Path(r.video_path).name, f"{r.metrics.ground_contact_time * 1000:.1f}" if r.metrics.ground_contact_time else "N/A", f"{r.metrics.flight_time * 1000:.1f}" if r.metrics.flight_time else "N/A", f"{r.metrics.jump_height:.3f}" if r.metrics.jump_height else "N/A", ]) ``` **See [examples/bulk/README.md](examples/bulk/README.md) for comprehensive API documentation and more examples.** ## Configuration Options ### Intelligent Auto-Tuning Kinemotion automatically optimizes parameters based on your video: - **FPS-based scaling**: 30fps, 60fps, 120fps videos use different thresholds automatically - **Quality-based adjustments**: Adapts smoothing based on MediaPipe tracking confidence - **Always enabled**: Outlier rejection, curvature analysis, drop start detection ### Parameters All parameters are optional. Kinemotion uses intelligent auto-tuning to select optimal settings based on video characteristics. 
- `--quality [fast|balanced|accurate]` (default: balanced) - **fast**: Quick analysis, less precise (~50% faster) - **balanced**: Good accuracy/speed tradeoff (recommended) - **accurate**: Research-grade analysis, slower (maximum precision) - `--verbose` / `-v` - Show auto-selected parameters and analysis details - Useful for understanding what the tool is doing - `--output <path>` / `-o` - Generate annotated debug video with pose tracking visualization - `--json-output <path>` / `-j` - Save metrics to JSON file instead of stdout ### Expert Overrides (Rarely Needed) For advanced users who need manual control: - `--drop-start-frame <int>`: Manually specify where drop begins (if auto-detection fails) - `--smoothing-window <int>`: Override auto-tuned smoothing window - `--velocity-threshold <float>`: Override auto-tuned velocity threshold - `--min-contact-frames <int>`: Override auto-tuned minimum contact frames - `--visibility-threshold <float>`: Override visibility threshold - `--detection-confidence <float>`: Override MediaPipe detection confidence - `--tracking-confidence <float>`: Override MediaPipe tracking confidence > **📖 For detailed parameter explanations, see [docs/reference/parameters.md](docs/reference/parameters.md)** > > **Note:** Most users never need expert parameters - auto-tuning handles optimization automatically! 
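To make the FPS-based scaling from the auto-tuning section concrete: a threshold defined in units per second stays constant across videos, while its per-frame equivalent shrinks as the frame rate grows. A toy sketch (the 0.6 units/s figure is made up, not kinemotion's actual tuning value):

```python
def per_frame_threshold(threshold_per_second: float, fps: float) -> float:
    """Convert a velocity threshold from units/second to units/frame."""
    return threshold_per_second / fps

# The same physical threshold maps to different per-frame values:
for fps in (30.0, 60.0, 120.0):
    print(f"{fps:5.0f} fps -> {per_frame_threshold(0.6, fps):.4f} units/frame")
```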
## Output Format ### Drop Jump JSON Output ```json { "data": { "ground_contact_time_ms": 245.67, "flight_time_ms": 456.78, "jump_height_m": 0.339, "jump_height_kinematic_m": 0.339, "jump_height_trajectory_normalized": 0.0845, "contact_start_frame": 45, "contact_end_frame": 67, "flight_start_frame": 68, "flight_end_frame": 95, "peak_height_frame": 82, "contact_start_frame_precise": 45.234, "contact_end_frame_precise": 67.891, "flight_start_frame_precise": 68.123, "flight_end_frame_precise": 94.567 }, "metadata": { "quality": { }, "processing_info": { } } } ``` **Data Fields**: - `ground_contact_time_ms`: Duration of ground contact phase in milliseconds - `flight_time_ms`: Duration of flight phase in milliseconds - `jump_height_m`: Jump height calculated from flight time: h = g × t² / 8 - `jump_height_kinematic_m`: Kinematic estimate (same as `jump_height_m`) - `jump_height_trajectory_normalized`: Position-based measurement in normalized coordinates (0-1 range) - `contact_start_frame`: Frame index where contact begins (integer, for visualization) - `contact_end_frame`: Frame index where contact ends (integer, for visualization) - `flight_start_frame`: Frame index where flight begins (integer, for visualization) - `flight_end_frame`: Frame index where flight ends (integer, for visualization) - `peak_height_frame`: Frame index at maximum jump height (integer, for visualization) - `contact_start_frame_precise`: Sub-frame precise timing for contact start (fractional, for calculations) - `contact_end_frame_precise`: Sub-frame precise timing for contact end (fractional, for calculations) - `flight_start_frame_precise`: Sub-frame precise timing for flight start (fractional, for calculations) - `flight_end_frame_precise`: Sub-frame precise timing for flight end (fractional, for calculations) **Note**: Integer frame indices are provided for visualization in debug videos. 
Precise fractional frames are used for all timing calculations and provide sub-frame accuracy (±10ms at 30fps). ### CMJ JSON Output ```json { "data": { "jump_height_m": 0.352, "flight_time_ms": 534.2, "countermovement_depth_m": 0.285, "eccentric_duration_ms": 612.5, "concentric_duration_ms": 321.8, "total_movement_time_ms": 934.3, "peak_eccentric_velocity_m_s": -2.145, "peak_concentric_velocity_m_s": 3.789, "transition_time_ms": 125.4, "standing_start_frame": 12.5, "lowest_point_frame": 45.2, "takeoff_frame": 67.8, "landing_frame": 102.3, "tracking_method": "foot" }, "metadata": { "quality": { }, "processing_info": { } } } ``` **Data Fields**: - `jump_height_m`: Jump height calculated from flight time: h = g × t² / 8 - `flight_time_ms`: Duration of flight phase in milliseconds - `countermovement_depth_m`: Maximum downward displacement during eccentric (descent) phase - `eccentric_duration_ms`: Time from start of countermovement to lowest point - `concentric_duration_ms`: Time from lowest point to takeoff - `total_movement_time_ms`: Total time from countermovement start to takeoff - `peak_eccentric_velocity_m_s`: Maximum downward velocity during descent (negative value) - `peak_concentric_velocity_m_s`: Maximum upward velocity during propulsion (positive value) - `transition_time_ms`: Duration at lowest point (amortization phase between descent and propulsion) - `standing_start_frame`: Frame where standing phase ends and countermovement begins - `lowest_point_frame`: Frame at the lowest point of the countermovement - `takeoff_frame`: Frame where athlete leaves ground - `landing_frame`: Frame where athlete lands after jump - `tracking_method`: Tracking method used - "foot" (foot landmarks) or "com" (center of mass estimation) ### Debug Video The debug video includes: - **Green circle**: Average foot position when on ground - **Red circle**: Average foot position when in air - **Yellow circles**: Individual foot landmarks (ankles, heels) - **State indicator**: Current 
contact state (on_ground/in_air) - **Phase labels**: "GROUND CONTACT" and "FLIGHT PHASE" during relevant periods - **Peak marker**: "PEAK HEIGHT" at maximum jump height - **Frame number**: Current frame index ## Troubleshooting ### Poor Tracking Quality **Symptoms**: Erratic landmark positions, missing detections, incorrect contact states **Solutions**: 1. **Check video quality**: Ensure the athlete is clearly visible in profile view 2. **Increase smoothing**: Use `--smoothing-window 7` or higher 3. **Adjust detection confidence**: Try `--detection-confidence 0.6` or `--tracking-confidence 0.6` 4. **Generate debug video**: Use `--output` to visualize what's being tracked ### No Pose Detected **Symptoms**: "No frames processed" error or all null landmarks **Solutions**: 1. **Verify video format**: OpenCV must be able to read the video 2. **Check framing**: Ensure full body is visible in side view 3. **Lower confidence thresholds**: Try `--detection-confidence 0.3 --tracking-confidence 0.3` 4. **Test video playback**: Verify video opens correctly with standard video players ### Incorrect Contact Detection **Symptoms**: Wrong ground contact times, flight phases not detected **Solutions**: 1. **Generate debug video**: Visualize contact states to diagnose the issue 2. **Adjust velocity threshold**: - If missing contacts: decrease to `--velocity-threshold 0.01` - If false contacts: increase to `--velocity-threshold 0.03` 3. **Adjust minimum frames**: `--min-contact-frames 5` for longer required contact 4. **Check visibility**: Lower `--visibility-threshold 0.3` if feet are partially obscured ### Jump Height Seems Wrong **Symptoms**: Unrealistic jump height values **Solutions**: 1. **Check video quality**: Ensure video frame rate is adequate (30fps or higher recommended) 2. **Verify flight time detection**: Check `flight_start_frame` and `flight_end_frame` in JSON 3. 
**Compare measurements**: JSON output includes both `jump_height_m` (primary) and `jump_height_kinematic_m` (kinematic-only) 4. **Check for drop jump detection**: If doing a drop jump, ensure first phase is elevated enough (>5% of frame height) ### Video Codec Issues **Symptoms**: Cannot write debug video or corrupted output **Solutions**: 1. **Install additional codecs**: Ensure OpenCV has proper video codec support 2. **Try different output format**: Use `.avi` extension instead of `.mp4` 3. **Check output path**: Ensure write permissions for output directory ## How It Works 1. **Pose Tracking**: MediaPipe extracts 2D pose landmarks (foot points: ankles, heels, foot indices) from each frame 2. **Position Calculation**: Averages ankle, heel, and foot index positions to determine foot location 3. **Smoothing**: Savitzky-Golay filter reduces tracking jitter while preserving motion dynamics 4. **Contact Detection**: Analyzes vertical position velocity to identify ground contact vs. flight phases 5. **Phase Identification**: Finds continuous ground contact and flight periods - Automatically detects drop jumps vs regular jumps - For drop jumps: identifies box → drop → ground contact → jump sequence 6. **Sub-Frame Interpolation**: Estimates exact transition times between frames - Uses Savitzky-Golay derivative for smooth velocity calculation - Linear interpolation of velocity to find threshold crossings - Achieves sub-millisecond timing precision (at 30fps: ±10ms vs ±33ms) - Reduces timing error by 60-70% for contact and flight measurements - Smoother velocity curves eliminate false threshold crossings 7. **Trajectory Curvature Analysis**: Refines transitions using acceleration patterns - Computes second derivative (acceleration) from position trajectory - Detects landing impact by acceleration spike - Identifies takeoff by acceleration change patterns - Provides independent validation and refinement of velocity-based detection 8. 
**Metric Calculation**: - Ground contact time = contact phase duration (using fractional frames) - Flight time = flight phase duration (using fractional frames) - Jump height = kinematic estimate from flight time: (g × t²) / 8 ## Development ### Code Quality Standards This project enforces strict code quality standards: - **Type safety**: Full pyright strict mode compliance with complete type annotations - **Linting**: Comprehensive ruff checks (pycodestyle, pyflakes, isort, pep8-naming, etc.) - **Formatting**: Black code style - **Testing**: pytest with 261 comprehensive tests (74.27% coverage) - **PEP 561 compliant**: Includes py.typed marker for type checking support ### Development Commands ```bash # Run the tool uv run kinemotion dropjump-analyze <video_path> # Run all tests uv run pytest # Run tests with verbose output uv run pytest -v # Format code uv run black src/ # Lint code uv run ruff check # Auto-fix linting issues uv run ruff check --fix # Type check uv run pyright # Run all checks uv run ruff check && uv run pyright && uv run pytest ``` ### Contributing Before committing code, ensure all checks pass: 1. Format with Black 2. Fix linting issues with ruff 3. Ensure type safety with pyright 4. Run all tests with pytest See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines and requirements, or [CLAUDE.md](CLAUDE.md) for detailed development guidelines. 
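The metric-calculation step above boils down to a few formulas. A minimal sketch, using made-up fractional frame indices at 60 fps; the RSI variant (flight/contact) and the Sayers power coefficients are common conventions from the literature, included here as assumptions rather than kinemotion's verified internals:

```python
G = 9.81  # gravitational acceleration, m/s^2

def phase_duration_s(start_frame: float, end_frame: float, fps: float) -> float:
    """Duration of a phase from (fractional) frame indices."""
    return (end_frame - start_frame) / fps

def jump_height_m(flight_time_s: float) -> float:
    """Kinematic estimate from flight time: h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8

def rsi(flight_time_s: float, contact_time_s: float) -> float:
    """Reactive strength index, flight-time / contact-time variant (assumed)."""
    return flight_time_s / contact_time_s

def sayers_peak_power_w(height_m: float, mass_kg: float) -> float:
    """Sayers regression, commonly cited form (coefficients assumed here)."""
    return 60.7 * (height_m * 100) + 45.3 * mass_kg - 2055

contact = phase_duration_s(45.234, 67.891, 60.0)  # made-up fractional frames
flight = phase_duration_s(68.123, 94.567, 60.0)
print(f"GCT {contact*1000:.1f} ms, flight {flight*1000:.1f} ms, "
      f"height {jump_height_m(flight):.3f} m, RSI {rsi(flight, contact):.2f}")
```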
## Limitations - **2D Analysis**: Only analyzes motion in the camera's view plane - **Validation Status**: ⚠️ Accuracy has not been validated against gold standard measurements (force plates, 3D motion capture) - **Side View Required**: Must film from the side to accurately track vertical motion - **Single Athlete**: Designed for analyzing one athlete at a time - **Timing precision**: - 30fps videos: ±10ms with sub-frame interpolation (vs ±33ms without) - 60fps videos: ±5ms with sub-frame interpolation (vs ±17ms without) - Higher frame rates still beneficial for better temporal resolution - **Drop jump detection**: Requires first ground phase to be >5% higher than second ground phase ## Future Enhancements - Advanced camera calibration (intrinsic parameters, lens distortion) - Multi-angle analysis support - Automatic camera orientation detection - Real-time analysis from webcam - Comparison with reference values - Force plate integration for validation ## License MIT License - feel free to use for personal experiments and research.
text/markdown
null
Sebastian Otaegui <feniix@gmail.com>
null
null
MIT
athletic-performance, drop-jump, kinemetry, kinemotion, mediapipe, pose-tracking, video-analysis
[ "Development Status :: 3 - Alpha", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Multimedi...
[]
null
null
<3.13,>=3.10
[]
[]
[]
[ "click>=8.1.7", "mediapipe>=0.10.30", "numpy>=1.26.0", "opencv-python>=4.9.0", "platformdirs>=4.0.0", "scipy>=1.11.0", "tqdm>=4.67.1", "typing-extensions>=4.15.0" ]
[]
[]
[]
[ "Homepage, https://github.com/feniix/kinemotion", "Repository, https://github.com/feniix/kinemotion", "Source, https://github.com/feniix/kinemotion", "Issues, https://github.com/feniix/kinemotion/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:30:15.558309
kinemotion-0.76.2-py3-none-any.whl
5,107,595
75/dc/0c0354438c3bf9bcf88446c9320aadfffc6b6d2be86a296c6a83fac238b4/kinemotion-0.76.2-py3-none-any.whl
py3
bdist_wheel
null
false
f650dd09a7be0485eedbcf59e1c84148
16ab7f6a6d172b0147cc0d619d16da6ff3f270854e1ecd2f5fbc58933bb49db8
75dc0c0354438c3bf9bcf88446c9320aadfffc6b6d2be86a296c6a83fac238b4
null
[ "LICENSE" ]
2.1
odoo-addons-oca-web
18.0.20260115.0
null
null
null
null
null
null
null
null
null
[ "Programming Language :: Python", "Framework :: Odoo", "Framework :: Odoo :: 18.0" ]
[]
null
null
null
[]
[]
[]
[ "odoo-addon-web-calendar-slot-duration==18.0.*", "odoo-addon-web-chatter-position==18.0.*", "odoo-addon-web-company-color==18.0.*", "odoo-addon-web-copy-confirm==18.0.*", "odoo-addon-web-dark-mode==18.0.*", "odoo-addon-web-datetime-picker-default-time==18.0.*", "odoo-addon-web-dialog-size==18.0.*", "o...
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:30:15.690914
odoo_addons_oca_web-18.0.20260115.0-py3-none-any.whl
1,807
0e/b1/e86b2ee865c7d5a48d0cb6e53b3bba535a6a11917795183bd595647cbaff/odoo_addons_oca_web-18.0.20260115.0-py3-none-any.whl
py3
bdist_wheel
null
false
eed879fbaa9a4703fc67727041836457
2a5084ad984191fb78934980c90b79a5deb5ed39193e51f0af4243d5daea169e
0eb1e86b2ee865c7d5a48d0cb6e53b3bba535a6a11917795183bd595647cbaff
null
[]
2.4
kis-wrapper
0.2.0
Korea Investment & Securities API wrapper for Python
# KIS Wrapper

A Python SDK for the Korea Investment & Securities Open API.

## Features

- Concise, function-based API
- Domestic and overseas stocks
- Real-time WebSocket data
- Switching between paper-trading and production environments
- Automatic token management

## Installation

```bash
pip install kis-wrapper
```

Or for development:

```bash
git clone https://github.com/LaytonAI/kis-wrapper
cd kis-wrapper
uv sync
```

## Quick Start

### Environment Setup

```bash
# .env
KIS_APP_KEY=your_app_key
KIS_APP_SECRET=your_app_secret
KIS_ACCOUNT=12345678-01
```

### Basic Usage

```python
import os
from kis import KIS, domestic

kis = KIS(
    app_key=os.environ["KIS_APP_KEY"],
    app_secret=os.environ["KIS_APP_SECRET"],
    account=os.environ["KIS_ACCOUNT"],
    env="paper",  # paper trading
)

# Samsung Electronics current price
p = domestic.price(kis, "005930")
print(f"Current price: {p['stck_prpr']} KRW")

# Order book
ob = domestic.orderbook(kis, "005930")

# Daily candles (last 30 days)
candles = domestic.daily(kis, "005930")
```

### Orders

```python
# Buy (limit order)
order = domestic.buy(kis, "005930", qty=10, price=70000)
print(f"Order number: {order['ODNO']}")

# Buy (market order)
order = domestic.buy(kis, "005930", qty=10)

# Sell
order = domestic.sell(kis, "005930", qty=5, price=72000)

# Cancel an order
domestic.cancel(kis, order_no="0001234567", qty=5)

# Modify an order
domestic.modify(kis, order_no="0001234567", qty=10, price=71000)
```

### Account Queries

```python
# Balance (cash deposit + holdings)
bal = domestic.balance(kis)

# Holdings only
positions = domestic.positions(kis)
for p in positions:
    print(f"{p['prdt_name']}: {p['hldg_qty']} shares")

# Position in a specific symbol
pos = domestic.position(kis, "005930")
if pos:
    print(f"Return: {pos['profit_rate']:.2f}%")

# Open (unfilled) orders
pending = domestic.pending_orders(kis)
```

### Overseas Stocks

```python
from kis import overseas

# Apple current price
p = overseas.price(kis, "AAPL", "NAS")
print(f"AAPL: ${p['last']}")

# Buy (limit order)
order = overseas.buy(kis, "AAPL", "NAS", qty=1, price=150.00)

# Buy (market order, US markets only)
order = overseas.buy(kis, "AAPL", "NAS", qty=1)

# Balance
bal = overseas.balance(kis)         # all markets
bal = overseas.balance(kis, "NAS")  # NASDAQ only

# Exchange rate
rate = overseas.exchange_rate(kis)
```

#### Exchange Codes

| Code | Exchange |
|------|----------|
| NYS | New York (NYSE) |
| NAS | NASDAQ |
| AMS | AMEX |
| HKS | Hong Kong |
| SHS | Shanghai |
| SZS | Shenzhen |
| TSE | Tokyo |
| HNX | Hanoi |
| HSX | Ho Chi Minh |

### Real-Time Data (WebSocket)

```python
import asyncio
from kis import KIS, WSClient

async def main():
    kis = KIS(app_key, app_secret, account, env="paper")
    ws = WSClient(kis)

    async def on_price(data):
        print(f"{data['symbol']}: {data['price']:,} KRW (volume: {data['volume']})")

    await ws.subscribe("H0STCNT0", ["005930", "000660"], on_price)
    try:
        await ws.run()
    except KeyboardInterrupt:
        await ws.close()

asyncio.run(main())
```

#### TR IDs

| TR ID | Description |
|-------|-------------|
| H0STCNT0 | Domestic stocks, real-time trades |
| H0STASP0 | Domestic stocks, real-time order book |
| H0STCNI0 | Execution notifications |
| HDFSCNT0 | Overseas stocks, real-time trades |

### Switching Environments

```python
# Paper trading -> production
kis_prod = kis.switch("prod")

# Or start in production
kis = KIS(app_key, app_secret, account, env="prod")
```

### Calculation Utilities

```python
from kis import calc

# Return rate
rate = calc.profit_rate(buy_price=70000, current_price=75000)
print(f"Return: {float(rate) * 100:.2f}%")

# Profit amount
profit = calc.profit_amount(70000, 75000, qty=10)

# Average price
orders = [{"price": 70000, "qty": 10}, {"price": 72000, "qty": 5}]
avg = calc.avg_price(orders)
```

### Snapshots

```python
from kis import snapshot

# Save current state
data = snapshot.snapshot(kis, "005930")
snapshot.save(data, "snapshots/005930.json")

# Load and verify
loaded = snapshot.load("snapshots/005930.json")
assert snapshot.verify(loaded)
```

## API Reference

### KIS Class

```python
KIS(app_key: str, app_secret: str, account: str, env: Env = "paper")
```

| Attribute/Method | Description |
|------------------|-------------|
| `is_paper` | Whether paper trading is active |
| `switch(env)` | Switch environments |
| `close()` | Close the connection |

### domestic Module

| Function | Description |
|----------|-------------|
| `price(kis, symbol)` | Current price |
| `orderbook(kis, symbol)` | Order book |
| `daily(kis, symbol, period="D")` | Daily/weekly/monthly candles |
| `buy(kis, symbol, qty, price=None)` | Buy |
| `sell(kis, symbol, qty, price=None)` | Sell |
| `cancel(kis, order_no, qty)` | Cancel |
| `modify(kis, order_no, qty, price)` | Modify |
| `balance(kis)` | Balance |
| `positions(kis)` | Holdings |
| `orders(kis, start_date, end_date)` | Order history |
| `pending_orders(kis)` | Open orders |
| `position(kis, symbol)` | Per-symbol position |
| `sell_all(kis, symbol)` | Sell the entire position |
| `cancel_remaining(kis, order_no)` | Cancel all unfilled quantity |

### overseas Module

| Function | Description |
|----------|-------------|
| `price(kis, symbol, exchange)` | Current price |
| `daily(kis, symbol, exchange, period="D")` | Historical prices |
| `buy(kis, symbol, exchange, qty, price=None)` | Buy |
| `sell(kis, symbol, exchange, qty, price=None)` | Sell |
| `cancel(kis, exchange, order_no, qty)` | Cancel |
| `balance(kis, exchange=None)` | Balance |
| `exchange_rate(kis)` | Exchange rate |

### WSClient Class

```python
WSClient(kis: KIS, max_retries: int = 5, retry_delay: float = 1.0)
```

| Method | Description |
|--------|-------------|
| `connect()` | Open the WebSocket connection |
| `subscribe(tr_id, symbols, callback)` | Subscribe |
| `unsubscribe(tr_id, symbols)` | Unsubscribe |
| `run()` | Message receive loop |
| `close()` | Close the connection |

## Development

```bash
# Tests
uv run pytest

# Coverage
uv run pytest --cov=kis

# Lint
uv run ruff check kis/

# Format
uv run ruff format kis/
```

## License

MIT License
text/markdown
null
null
null
null
null
api, kis, korea-investment, stock, trading
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Office/Business :: Financial :: Investment" ]
[]
null
null
>=3.11
[]
[]
[]
[ "httpx", "pycryptodome", "websockets", "mypy; extra == \"dev\"", "pytest; extra == \"dev\"", "pytest-asyncio; extra == \"dev\"", "pytest-httpx; extra == \"dev\"", "ruff; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/LaytonAI/kis-wrapper", "Documentation, https://github.com/LaytonAI/kis-wrapper#readme", "Issues, https://github.com/LaytonAI/kis-wrapper/issues" ]
uv/0.7.5
2026-01-16T04:30:43.219973
kis_wrapper-0.2.0-py3-none-any.whl
16,084
44/bd/1d5c48ad39290e10e640efb2a8d386e4f7de90fc2b049cfafaccc8a3584e/kis_wrapper-0.2.0-py3-none-any.whl
py3
bdist_wheel
null
false
f6abd29e11c23bace67dcb9f8d03c90b
1bc26b54aa839f312251ffa17f2f71f338d499897cd3e8d607b16308ab2b145f
44bd1d5c48ad39290e10e640efb2a8d386e4f7de90fc2b049cfafaccc8a3584e
MIT
[ "LICENSE" ]
2.4
kis-wrapper
0.2.0
Korea Investment & Securities API wrapper for Python
# KIS Wrapper Korea Investment & Securities Open API Python SDK ## Features - Concise function-based API - Domestic/overseas stock support - Real-time WebSocket - Paper/live environment switching - Automatic token management ## Installation ```bash pip install kis ``` Or for a development setup: ```bash git clone https://github.com/your-repo/kis-wrapper cd kis-wrapper uv sync ``` ## Quick Start ### Environment Setup ```bash # .env KIS_APP_KEY=your_app_key KIS_APP_SECRET=your_app_secret KIS_ACCOUNT=12345678-01 ``` ### Basic Usage ```python import os from kis import KIS, domestic kis = KIS( app_key=os.environ["KIS_APP_KEY"], app_secret=os.environ["KIS_APP_SECRET"], account=os.environ["KIS_ACCOUNT"], env="paper",  # paper trading ) # Samsung Electronics current price p = domestic.price(kis, "005930") print(f"Current price: {p['stck_prpr']} KRW") # Order book ob = domestic.orderbook(kis, "005930") # Daily candles (last 30 days) candles = domestic.daily(kis, "005930") ``` ### Orders ```python # Buy (limit) order = domestic.buy(kis, "005930", qty=10, price=70000) print(f"Order number: {order['ODNO']}") # Buy (market) order = domestic.buy(kis, "005930", qty=10) # Sell order = domestic.sell(kis, "005930", qty=5, price=72000) # Cancel an order domestic.cancel(kis, order_no="0001234567", qty=5) # Modify an order domestic.modify(kis, order_no="0001234567", qty=10, price=71000) ``` ### Account Inquiry ```python # Balance (cash deposits + holdings) bal = domestic.balance(kis) # Holdings only positions = domestic.positions(kis) for p in positions: print(f"{p['prdt_name']}: {p['hldg_qty']} shares") # Position for a specific symbol pos = domestic.position(kis, "005930") if pos: print(f"Return: {pos['profit_rate']:.2f}%") # Pending (unfilled) orders pending = domestic.pending_orders(kis) ``` ### Overseas Stocks ```python from kis import overseas # Apple current price p = overseas.price(kis, "AAPL", "NAS") print(f"AAPL: ${p['last']}") # Buy (limit) order = overseas.buy(kis, "AAPL", "NAS", qty=1, price=150.00) # Buy (market - US exchanges only) order = overseas.buy(kis, "AAPL", "NAS", qty=1) # Balance bal = overseas.balance(kis)  # all exchanges bal = overseas.balance(kis, "NAS")  # NASDAQ only # Exchange rate rate = overseas.exchange_rate(kis) ``` #### Exchange Codes | Code | Exchange | |------|----------| | NYS | New York (NYSE) | | NAS | NASDAQ | | AMS | AMEX | | HKS | Hong Kong | | SHS | Shanghai | | SZS | Shenzhen | | TSE | Tokyo | | 
HNX | Hanoi | | HSX | Ho Chi Minh | ### Real-Time Data (WebSocket) ```python import asyncio from kis import KIS, WSClient async def main(): kis = KIS(app_key, app_secret, account, env="paper") ws = WSClient(kis) async def on_price(data): print(f"{data['symbol']}: {data['price']:,} KRW (volume: {data['volume']})") await ws.subscribe("H0STCNT0", ["005930", "000660"], on_price) try: await ws.run() except KeyboardInterrupt: await ws.close() asyncio.run(main()) ``` #### TR ID List | TR ID | Description | |-------|------| | H0STCNT0 | Domestic stock real-time trades | | H0STASP0 | Domestic stock real-time order book | | H0STCNI0 | Fill notifications | | HDFSCNT0 | Overseas stock real-time trades | ### Environment Switching ```python # Paper trading -> live kis_prod = kis.switch("prod") # Or start in live mode from the beginning kis = KIS(app_key, app_secret, account, env="prod") ``` ### Calculation Utilities ```python from kis import calc # Profit rate rate = calc.profit_rate(buy_price=70000, current_price=75000) print(f"Return: {float(rate) * 100:.2f}%") # Profit amount profit = calc.profit_amount(70000, 75000, qty=10) # Average price orders = [{"price": 70000, "qty": 10}, {"price": 72000, "qty": 5}] avg = calc.avg_price(orders) ``` ### Snapshots ```python from kis import snapshot # Save the current state data = snapshot.snapshot(kis, "005930") snapshot.save(data, "snapshots/005930.json") # Load and verify loaded = snapshot.load("snapshots/005930.json") assert snapshot.verify(loaded) ``` ## API Reference ### KIS Class ```python KIS(app_key: str, app_secret: str, account: str, env: Env = "paper") ``` | Attribute/Method | Description | |-------------|------| | `is_paper` | Whether paper trading is active | | `switch(env)` | Switch environments | | `close()` | Close the connection | ### domestic Module | Function | Description | |------|------| | `price(kis, symbol)` | Current price | | `orderbook(kis, symbol)` | Order book | | `daily(kis, symbol, period="D")` | Daily/weekly/monthly candles | | `buy(kis, symbol, qty, price=None)` | Buy | | `sell(kis, symbol, qty, price=None)` | Sell | | `cancel(kis, order_no, qty)` | Cancel | | `modify(kis, order_no, qty, price)` | Modify | | `balance(kis)` | Balance | | `positions(kis)` | Holdings | | `orders(kis, start_date, end_date)` | Order history | | `pending_orders(kis)` | Pending orders | | `position(kis, symbol)` | Per-symbol position | | 
`sell_all(kis, symbol)` | Sell the entire position | | `cancel_remaining(kis, order_no)` | Cancel all unfilled quantity | ### overseas Module | Function | Description | |------|------| | `price(kis, symbol, exchange)` | Current price | | `daily(kis, symbol, exchange, period="D")` | Historical prices | | `buy(kis, symbol, exchange, qty, price=None)` | Buy | | `sell(kis, symbol, exchange, qty, price=None)` | Sell | | `cancel(kis, exchange, order_no, qty)` | Cancel | | `balance(kis, exchange=None)` | Balance | | `exchange_rate(kis)` | Exchange rate | ### WSClient Class ```python WSClient(kis: KIS, max_retries: int = 5, retry_delay: float = 1.0) ``` | Method | Description | |--------|------| | `connect()` | Open the WebSocket connection | | `subscribe(tr_id, symbols, callback)` | Subscribe | | `unsubscribe(tr_id, symbols)` | Unsubscribe | | `run()` | Message receive loop | | `close()` | Close the connection | ## Development ```bash # Tests uv run pytest # Coverage uv run pytest --cov=kis # Lint uv run ruff check kis/ # Format uv run ruff format kis/ ``` ## License MIT License
text/markdown
null
null
null
null
null
api, kis, korea-investment, stock, trading
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Office/Business :: Financial :: Investment" ]
[]
null
null
>=3.11
[]
[]
[]
[ "httpx", "pycryptodome", "websockets", "mypy; extra == \"dev\"", "pytest; extra == \"dev\"", "pytest-asyncio; extra == \"dev\"", "pytest-httpx; extra == \"dev\"", "ruff; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/LaytonAI/kis-wrapper", "Documentation, https://github.com/LaytonAI/kis-wrapper#readme", "Issues, https://github.com/LaytonAI/kis-wrapper/issues" ]
uv/0.7.5
2026-01-16T04:30:44.667904
kis_wrapper-0.2.0.tar.gz
64,788
3b/fc/053faec283092fc713573df45e4cce9fcc8638160477738a1bdd0402c5ab/kis_wrapper-0.2.0.tar.gz
source
sdist
null
false
6af58ef605149c6f565aac8e37eef4c5
0f0ee6e7b60fc4d9a0786f40ecd75281c5752db4e5d16d2d285e7236badbdd5c
3bfc053faec283092fc713573df45e4cce9fcc8638160477738a1bdd0402c5ab
MIT
[ "LICENSE" ]
2.4
kis-wrapper
0.3.0
Korea Investment & Securities API wrapper for Python
# KIS Wrapper Korea Investment & Securities Open API Python SDK ## Features - Concise function-based API - Domestic/overseas stock support - Real-time WebSocket - Paper/live environment switching - Automatic token management ## Installation ```bash pip install kis ``` Or for a development setup: ```bash git clone https://github.com/your-repo/kis-wrapper cd kis-wrapper uv sync ``` ## Quick Start ### Environment Setup ```bash # .env KIS_APP_KEY=your_app_key KIS_APP_SECRET=your_app_secret KIS_ACCOUNT=12345678-01 ``` ### Basic Usage ```python import os from kis import KIS, domestic kis = KIS( app_key=os.environ["KIS_APP_KEY"], app_secret=os.environ["KIS_APP_SECRET"], account=os.environ["KIS_ACCOUNT"], env="paper",  # paper trading ) # Samsung Electronics current price p = domestic.price(kis, "005930") print(f"Current price: {p['stck_prpr']} KRW") # Order book ob = domestic.orderbook(kis, "005930") # Daily candles (last 30 days) candles = domestic.daily(kis, "005930") ``` ### Orders ```python # Buy (limit) order = domestic.buy(kis, "005930", qty=10, price=70000) print(f"Order number: {order['ODNO']}") # Buy (market) order = domestic.buy(kis, "005930", qty=10) # Sell order = domestic.sell(kis, "005930", qty=5, price=72000) # Cancel an order domestic.cancel(kis, order_no="0001234567", qty=5) # Modify an order domestic.modify(kis, order_no="0001234567", qty=10, price=71000) ``` ### Account Inquiry ```python # Balance (cash deposits + holdings) bal = domestic.balance(kis) # Holdings only positions = domestic.positions(kis) for p in positions: print(f"{p['prdt_name']}: {p['hldg_qty']} shares") # Position for a specific symbol pos = domestic.position(kis, "005930") if pos: print(f"Return: {pos['profit_rate']:.2f}%") # Pending (unfilled) orders pending = domestic.pending_orders(kis) ``` ### Overseas Stocks ```python from kis import overseas # Apple current price p = overseas.price(kis, "AAPL", "NAS") print(f"AAPL: ${p['last']}") # Buy (limit) order = overseas.buy(kis, "AAPL", "NAS", qty=1, price=150.00) # Buy (market - US exchanges only) order = overseas.buy(kis, "AAPL", "NAS", qty=1) # Balance bal = overseas.balance(kis)  # all exchanges bal = overseas.balance(kis, "NAS")  # NASDAQ only # Exchange rate rate = overseas.exchange_rate(kis) ``` #### Exchange Codes | Code | Exchange | |------|----------| | NYS | New York (NYSE) | | NAS | NASDAQ | | AMS | AMEX | | HKS | Hong Kong | | SHS | Shanghai | | SZS | Shenzhen | | TSE | Tokyo | | 
HNX | Hanoi | | HSX | Ho Chi Minh | ### Real-Time Data (WebSocket) ```python import asyncio from kis import KIS, WSClient async def main(): kis = KIS(app_key, app_secret, account, env="paper") ws = WSClient(kis) async def on_price(data): print(f"{data['symbol']}: {data['price']:,} KRW (volume: {data['volume']})") await ws.subscribe("H0STCNT0", ["005930", "000660"], on_price) try: await ws.run() except KeyboardInterrupt: await ws.close() asyncio.run(main()) ``` #### TR ID List | TR ID | Description | |-------|------| | H0STCNT0 | Domestic stock real-time trades | | H0STASP0 | Domestic stock real-time order book | | H0STCNI0 | Fill notifications | | HDFSCNT0 | Overseas stock real-time trades | ### Environment Switching ```python # Paper trading -> live kis_prod = kis.switch("prod") # Or start in live mode from the beginning kis = KIS(app_key, app_secret, account, env="prod") ``` ### Calculation Utilities ```python from kis import calc # Profit rate rate = calc.profit_rate(buy_price=70000, current_price=75000) print(f"Return: {float(rate) * 100:.2f}%") # Profit amount profit = calc.profit_amount(70000, 75000, qty=10) # Average price orders = [{"price": 70000, "qty": 10}, {"price": 72000, "qty": 5}] avg = calc.avg_price(orders) ``` ### Snapshots ```python from kis import snapshot # Save the current state data = snapshot.snapshot(kis, "005930") snapshot.save(data, "snapshots/005930.json") # Load and verify loaded = snapshot.load("snapshots/005930.json") assert snapshot.verify(loaded) ``` ## API Reference ### KIS Class ```python KIS(app_key: str, app_secret: str, account: str, env: Env = "paper") ``` | Attribute/Method | Description | |-------------|------| | `is_paper` | Whether paper trading is active | | `switch(env)` | Switch environments | | `close()` | Close the connection | ### domestic Module | Function | Description | |------|------| | `price(kis, symbol)` | Current price | | `orderbook(kis, symbol)` | Order book | | `daily(kis, symbol, period="D")` | Daily/weekly/monthly candles | | `buy(kis, symbol, qty, price=None)` | Buy | | `sell(kis, symbol, qty, price=None)` | Sell | | `cancel(kis, order_no, qty)` | Cancel | | `modify(kis, order_no, qty, price)` | Modify | | `balance(kis)` | Balance | | `positions(kis)` | Holdings | | `orders(kis, start_date, end_date)` | Order history | | `pending_orders(kis)` | Pending orders | | `position(kis, symbol)` | Per-symbol position | | 
`sell_all(kis, symbol)` | Sell the entire position | | `cancel_remaining(kis, order_no)` | Cancel all unfilled quantity | ### overseas Module | Function | Description | |------|------| | `price(kis, symbol, exchange)` | Current price | | `daily(kis, symbol, exchange, period="D")` | Historical prices | | `buy(kis, symbol, exchange, qty, price=None)` | Buy | | `sell(kis, symbol, exchange, qty, price=None)` | Sell | | `cancel(kis, exchange, order_no, qty)` | Cancel | | `balance(kis, exchange=None)` | Balance | | `exchange_rate(kis)` | Exchange rate | ### WSClient Class ```python WSClient(kis: KIS, max_retries: int = 5, retry_delay: float = 1.0) ``` | Method | Description | |--------|------| | `connect()` | Open the WebSocket connection | | `subscribe(tr_id, symbols, callback)` | Subscribe | | `unsubscribe(tr_id, symbols)` | Unsubscribe | | `run()` | Message receive loop | | `close()` | Close the connection | ## Development ```bash # Tests uv run pytest # Coverage uv run pytest --cov=kis # Lint uv run ruff check kis/ # Format uv run ruff format kis/ ``` ## License MIT License
text/markdown
null
null
null
null
null
api, kis, korea-investment, stock, trading
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Office/Business :: Financial :: Investment" ]
[]
null
null
>=3.11
[]
[]
[]
[ "httpx", "pycryptodome", "websockets", "mypy; extra == \"dev\"", "pytest; extra == \"dev\"", "pytest-asyncio; extra == \"dev\"", "pytest-httpx; extra == \"dev\"", "ruff; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/LaytonAI/kis-wrapper", "Documentation, https://github.com/LaytonAI/kis-wrapper#readme", "Issues, https://github.com/LaytonAI/kis-wrapper/issues" ]
uv/0.7.5
2026-01-16T04:30:45.665794
kis_wrapper-0.3.0-py3-none-any.whl
17,473
f7/66/caccd627fe7d4913ef5ff1bf2e24718c2c726f4ffdf7f9f2d86f830b7464/kis_wrapper-0.3.0-py3-none-any.whl
py3
bdist_wheel
null
false
9accd3127fa31ff168a9929a40526070
14533b3898ed17cf330165968db014f5c51a4a0b8892b916c21fa5ae059aa1ff
f766caccd627fe7d4913ef5ff1bf2e24718c2c726f4ffdf7f9f2d86f830b7464
MIT
[ "LICENSE" ]
2.4
kis-wrapper
0.3.0
Korea Investment & Securities API wrapper for Python
# KIS Wrapper Korea Investment & Securities Open API Python SDK ## Features - Concise function-based API - Domestic/overseas stock support - Real-time WebSocket - Paper/live environment switching - Automatic token management ## Installation ```bash pip install kis ``` Or for a development setup: ```bash git clone https://github.com/your-repo/kis-wrapper cd kis-wrapper uv sync ``` ## Quick Start ### Environment Setup ```bash # .env KIS_APP_KEY=your_app_key KIS_APP_SECRET=your_app_secret KIS_ACCOUNT=12345678-01 ``` ### Basic Usage ```python import os from kis import KIS, domestic kis = KIS( app_key=os.environ["KIS_APP_KEY"], app_secret=os.environ["KIS_APP_SECRET"], account=os.environ["KIS_ACCOUNT"], env="paper",  # paper trading ) # Samsung Electronics current price p = domestic.price(kis, "005930") print(f"Current price: {p['stck_prpr']} KRW") # Order book ob = domestic.orderbook(kis, "005930") # Daily candles (last 30 days) candles = domestic.daily(kis, "005930") ``` ### Orders ```python # Buy (limit) order = domestic.buy(kis, "005930", qty=10, price=70000) print(f"Order number: {order['ODNO']}") # Buy (market) order = domestic.buy(kis, "005930", qty=10) # Sell order = domestic.sell(kis, "005930", qty=5, price=72000) # Cancel an order domestic.cancel(kis, order_no="0001234567", qty=5) # Modify an order domestic.modify(kis, order_no="0001234567", qty=10, price=71000) ``` ### Account Inquiry ```python # Balance (cash deposits + holdings) bal = domestic.balance(kis) # Holdings only positions = domestic.positions(kis) for p in positions: print(f"{p['prdt_name']}: {p['hldg_qty']} shares") # Position for a specific symbol pos = domestic.position(kis, "005930") if pos: print(f"Return: {pos['profit_rate']:.2f}%") # Pending (unfilled) orders pending = domestic.pending_orders(kis) ``` ### Overseas Stocks ```python from kis import overseas # Apple current price p = overseas.price(kis, "AAPL", "NAS") print(f"AAPL: ${p['last']}") # Buy (limit) order = overseas.buy(kis, "AAPL", "NAS", qty=1, price=150.00) # Buy (market - US exchanges only) order = overseas.buy(kis, "AAPL", "NAS", qty=1) # Balance bal = overseas.balance(kis)  # all exchanges bal = overseas.balance(kis, "NAS")  # NASDAQ only # Exchange rate rate = overseas.exchange_rate(kis) ``` #### Exchange Codes | Code | Exchange | |------|----------| | NYS | New York (NYSE) | | NAS | NASDAQ | | AMS | AMEX | | HKS | Hong Kong | | SHS | Shanghai | | SZS | Shenzhen | | TSE | Tokyo | | 
HNX | Hanoi | | HSX | Ho Chi Minh | ### Real-Time Data (WebSocket) ```python import asyncio from kis import KIS, WSClient async def main(): kis = KIS(app_key, app_secret, account, env="paper") ws = WSClient(kis) async def on_price(data): print(f"{data['symbol']}: {data['price']:,} KRW (volume: {data['volume']})") await ws.subscribe("H0STCNT0", ["005930", "000660"], on_price) try: await ws.run() except KeyboardInterrupt: await ws.close() asyncio.run(main()) ``` #### TR ID List | TR ID | Description | |-------|------| | H0STCNT0 | Domestic stock real-time trades | | H0STASP0 | Domestic stock real-time order book | | H0STCNI0 | Fill notifications | | HDFSCNT0 | Overseas stock real-time trades | ### Environment Switching ```python # Paper trading -> live kis_prod = kis.switch("prod") # Or start in live mode from the beginning kis = KIS(app_key, app_secret, account, env="prod") ``` ### Calculation Utilities ```python from kis import calc # Profit rate rate = calc.profit_rate(buy_price=70000, current_price=75000) print(f"Return: {float(rate) * 100:.2f}%") # Profit amount profit = calc.profit_amount(70000, 75000, qty=10) # Average price orders = [{"price": 70000, "qty": 10}, {"price": 72000, "qty": 5}] avg = calc.avg_price(orders) ``` ### Snapshots ```python from kis import snapshot # Save the current state data = snapshot.snapshot(kis, "005930") snapshot.save(data, "snapshots/005930.json") # Load and verify loaded = snapshot.load("snapshots/005930.json") assert snapshot.verify(loaded) ``` ## API Reference ### KIS Class ```python KIS(app_key: str, app_secret: str, account: str, env: Env = "paper") ``` | Attribute/Method | Description | |-------------|------| | `is_paper` | Whether paper trading is active | | `switch(env)` | Switch environments | | `close()` | Close the connection | ### domestic Module | Function | Description | |------|------| | `price(kis, symbol)` | Current price | | `orderbook(kis, symbol)` | Order book | | `daily(kis, symbol, period="D")` | Daily/weekly/monthly candles | | `buy(kis, symbol, qty, price=None)` | Buy | | `sell(kis, symbol, qty, price=None)` | Sell | | `cancel(kis, order_no, qty)` | Cancel | | `modify(kis, order_no, qty, price)` | Modify | | `balance(kis)` | Balance | | `positions(kis)` | Holdings | | `orders(kis, start_date, end_date)` | Order history | | `pending_orders(kis)` | Pending orders | | `position(kis, symbol)` | Per-symbol position | | 
`sell_all(kis, symbol)` | Sell the entire position | | `cancel_remaining(kis, order_no)` | Cancel all unfilled quantity | ### overseas Module | Function | Description | |------|------| | `price(kis, symbol, exchange)` | Current price | | `daily(kis, symbol, exchange, period="D")` | Historical prices | | `buy(kis, symbol, exchange, qty, price=None)` | Buy | | `sell(kis, symbol, exchange, qty, price=None)` | Sell | | `cancel(kis, exchange, order_no, qty)` | Cancel | | `balance(kis, exchange=None)` | Balance | | `exchange_rate(kis)` | Exchange rate | ### WSClient Class ```python WSClient(kis: KIS, max_retries: int = 5, retry_delay: float = 1.0) ``` | Method | Description | |--------|------| | `connect()` | Open the WebSocket connection | | `subscribe(tr_id, symbols, callback)` | Subscribe | | `unsubscribe(tr_id, symbols)` | Unsubscribe | | `run()` | Message receive loop | | `close()` | Close the connection | ## Development ```bash # Tests uv run pytest # Coverage uv run pytest --cov=kis # Lint uv run ruff check kis/ # Format uv run ruff format kis/ ``` ## License MIT License
text/markdown
null
null
null
null
null
api, kis, korea-investment, stock, trading
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Office/Business :: Financial :: Investment" ]
[]
null
null
>=3.11
[]
[]
[]
[ "httpx", "pycryptodome", "websockets", "mypy; extra == \"dev\"", "pytest; extra == \"dev\"", "pytest-asyncio; extra == \"dev\"", "pytest-httpx; extra == \"dev\"", "ruff; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/LaytonAI/kis-wrapper", "Documentation, https://github.com/LaytonAI/kis-wrapper#readme", "Issues, https://github.com/LaytonAI/kis-wrapper/issues" ]
uv/0.7.5
2026-01-16T04:30:47.300206
kis_wrapper-0.3.0.tar.gz
65,871
54/42/46e9464fc75886818671b2b78a27ec2bb4e216601c4f0f5eeae509b8228d/kis_wrapper-0.3.0.tar.gz
source
sdist
null
false
7135270d42acbaae59f29983db96c1ac
974e1ec038e4a11672ac3a267f881fca5fdf3ea2ac988e9eb902bfec5349d605
544246e9464fc75886818671b2b78a27ec2bb4e216601c4f0f5eeae509b8228d
MIT
[ "LICENSE" ]
2.4
ccburn
0.3.0
Terminal-based Claude Code usage limit visualizer with real-time burn-up charts
# 🔥 ccburn [![CI](https://img.shields.io/github/actions/workflow/status/JuanjoFuchs/ccburn/ci.yml?branch=main&label=CI)](https://github.com/JuanjoFuchs/ccburn/actions/workflows/ci.yml) [![Release](https://img.shields.io/github/actions/workflow/status/JuanjoFuchs/ccburn/release.yml?label=Release)](https://github.com/JuanjoFuchs/ccburn/actions/workflows/release.yml) [![npm](https://img.shields.io/npm/v/ccburn)](https://www.npmjs.com/package/ccburn) [![PyPI](https://img.shields.io/pypi/v/ccburn)](https://pypi.org/project/ccburn/) [![Python](https://img.shields.io/pypi/pyversions/ccburn)](https://pypi.org/project/ccburn/) [![GitHub Release](https://img.shields.io/github/v/release/JuanjoFuchs/ccburn)](https://github.com/JuanjoFuchs/ccburn/releases) [![WinGet](https://img.shields.io/winget/v/JuanjoFuchs.ccburn)](https://winstall.app/apps/JuanjoFuchs.ccburn) [![npm downloads](https://img.shields.io/npm/dt/ccburn?label=npm%20downloads)](https://www.npmjs.com/package/ccburn) [![PyPI downloads](https://img.shields.io/pepy/dt/ccburn?label=pypi%20downloads)](https://pepy.tech/project/ccburn) [![GitHub downloads](https://img.shields.io/github/downloads/JuanjoFuchs/ccburn/total?label=github%20downloads)](https://github.com/JuanjoFuchs/ccburn/releases) [![License](https://img.shields.io/github/license/JuanjoFuchs/ccburn)](LICENSE) <p align="center"> <img src="docs/cash1.png" alt="Burning tokens" width="140"> </p> <p align="center"> <strong>Watch your tokens burn — before you get burned.</strong> </p> TUI and CLI for Claude Code usage limits — burn-up charts, compact mode for status bars, JSON for automation. ![ccburn screenshot](docs/ccburn.png) ## Features - **Real-time burn-up charts** — Visualize session and weekly usage with live-updating terminal graphics - **Pace indicators** — 🧊 Cool. 🔥 On pace. 🚨 Too hot. 
- **Multiple output modes** — Full TUI, compact single-line for status bars, or JSON for scripting - **Automatic data persistence** — SQLite-backed history for trend analysis - **Dynamic window title** — Terminal tab shows current usage at a glance - **Zoom views** — Focus on recent activity with `--since` ## Installation Run `claude` and login first to refresh credentials. ### WinGet (Windows) ```powershell winget install JuanjoFuchs.ccburn ``` ### npx ```bash npx ccburn ``` ### npm ```bash npm install -g ccburn ``` ### pip ```bash pip install ccburn ``` ### From Source ```bash git clone https://github.com/JuanjoFuchs/ccburn.git cd ccburn pip install -e ".[dev]" ``` ## Quick Start 1. **Run Claude Code first** to ensure credentials are fresh: ```bash claude ``` 2. **Run ccburn:** ```bash ccburn # Session limit (default) ccburn weekly # Weekly limit ccburn weekly-sonnet # Weekly Sonnet limit ``` ## Usage Examples ```bash # Full TUI with burn-up chart (default) ccburn # Weekly usage view ccburn weekly # Compact output for tmux/status bars ccburn --compact # Output: Session: 🔥 45% (2h14m) | Weekly: 🧊 12% | Sonnet: 🧊 3% # JSON output for scripting/automation ccburn --json # Zoom to last 30 minutes ccburn --since 30m # Single snapshot (no live updates) ccburn --once # Custom refresh interval (seconds) ccburn --interval 10 ``` ## Command Line Reference ``` Usage: ccburn [OPTIONS] [LIMIT] Arguments: [LIMIT] Which limit to display [default: session] Options: session, weekly, weekly-sonnet Options: -i, --interval INTEGER Refresh interval in seconds [default: 5/30] -s, --since TEXT Only show data since (e.g., 30m, 2h, 1d) -j, --json Output JSON and exit -o, --once Print once and exit (no live updates) -c, --compact Single-line output for status bars --debug Show debug information --version Show version and exit --help Show this message and exit ``` ## Pace Indicators | Emoji | Status | Meaning | |-------|--------|---------| | 🧊 | Behind pace | Usage below expected budget — 
you have headroom | | 🔥 | On pace | Usage tracking with expected budget | | 🚨 | Ahead of pace | Usage above expected budget — slow down! | ## Requirements - **Python 3.10+** - **Claude Code** installed with valid credentials - Terminal with Unicode support (for charts and emojis) ## How It Works ccburn reads your Claude Code credentials and fetches usage data from the Anthropic API. It calculates: - **Budget pace** — Where you "should" be based on time elapsed in the window - **Burn rate** — How fast you're consuming your limit - **Time to limit** — Estimated time until you hit 100% (if current rate continues) Data is stored locally in SQLite for historical analysis and to minimize API calls when running multiple instances. ## Troubleshooting ### "Credentials not found" Ensure Claude Code is installed and you've logged in at least once: ```bash claude # This will prompt for login if needed ``` ### Chart not displaying correctly Ensure your terminal supports Unicode and has a monospace font with emoji support. Recommended terminals: - **Windows**: Windows Terminal - **macOS**: iTerm2, Terminal.app - **Linux**: Kitty, Alacritty, GNOME Terminal ### Stale data indicator If you see "(stale)" in the header, ccburn couldn't reach the API. It will continue showing cached data and retry automatically. ## Contributing Contributions are welcome! Please feel free to submit a Pull Request. ## License [MIT](LICENSE) ## Acknowledgments - [Rich](https://github.com/Textualize/rich) — Beautiful terminal formatting - [Plotext](https://github.com/piccolomo/plotext) — Terminal plotting - [Typer](https://github.com/tiangolo/typer) — CLI framework
text/markdown
JuanjoFuchs
null
null
null
null
claude, anthropic, usage, monitoring, tui, visualization, cli
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "P...
[]
null
null
>=3.10
[]
[]
[]
[ "typer[all]>=0.9.0", "rich>=13.0.0", "plotext>=5.2.0", "httpx>=0.25.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"", "build>=1.0.0; extra == \"dev\"", "twine>=4.0.0; extra == \"dev\"", "pyinstaller>=6....
[]
[]
[]
[ "Homepage, https://github.com/JuanjoFuchs/ccburn", "Repository, https://github.com/JuanjoFuchs/ccburn.git", "Issues, https://github.com/JuanjoFuchs/ccburn/issues", "Documentation, https://github.com/JuanjoFuchs/ccburn#readme" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:31:07.292351
ccburn-0.3.0-py3-none-any.whl
35,948
72/0f/affb4c8df93a8b31e9bd670b30c24548c1adc9928bac355a256443a162d3/ccburn-0.3.0-py3-none-any.whl
py3
bdist_wheel
null
false
db3f953988cf9c4aafed986404beb918
6f9fab12db62082575ab62fc861fba6d17ddd1a1d9b7157813185ca6cb551b73
720faffb4c8df93a8b31e9bd670b30c24548c1adc9928bac355a256443a162d3
MIT
[ "LICENSE" ]
2.4
ccburn
0.3.0
Terminal-based Claude Code usage limit visualizer with real-time burn-up charts
# 🔥 ccburn [![CI](https://img.shields.io/github/actions/workflow/status/JuanjoFuchs/ccburn/ci.yml?branch=main&label=CI)](https://github.com/JuanjoFuchs/ccburn/actions/workflows/ci.yml) [![Release](https://img.shields.io/github/actions/workflow/status/JuanjoFuchs/ccburn/release.yml?label=Release)](https://github.com/JuanjoFuchs/ccburn/actions/workflows/release.yml) [![npm](https://img.shields.io/npm/v/ccburn)](https://www.npmjs.com/package/ccburn) [![PyPI](https://img.shields.io/pypi/v/ccburn)](https://pypi.org/project/ccburn/) [![Python](https://img.shields.io/pypi/pyversions/ccburn)](https://pypi.org/project/ccburn/) [![GitHub Release](https://img.shields.io/github/v/release/JuanjoFuchs/ccburn)](https://github.com/JuanjoFuchs/ccburn/releases) [![WinGet](https://img.shields.io/winget/v/JuanjoFuchs.ccburn)](https://winstall.app/apps/JuanjoFuchs.ccburn) [![npm downloads](https://img.shields.io/npm/dt/ccburn?label=npm%20downloads)](https://www.npmjs.com/package/ccburn) [![PyPI downloads](https://img.shields.io/pepy/dt/ccburn?label=pypi%20downloads)](https://pepy.tech/project/ccburn) [![GitHub downloads](https://img.shields.io/github/downloads/JuanjoFuchs/ccburn/total?label=github%20downloads)](https://github.com/JuanjoFuchs/ccburn/releases) [![License](https://img.shields.io/github/license/JuanjoFuchs/ccburn)](LICENSE) <p align="center"> <img src="docs/cash1.png" alt="Burning tokens" width="140"> </p> <p align="center"> <strong>Watch your tokens burn — before you get burned.</strong> </p> TUI and CLI for Claude Code usage limits — burn-up charts, compact mode for status bars, JSON for automation. ![ccburn screenshot](docs/ccburn.png) ## Features - **Real-time burn-up charts** — Visualize session and weekly usage with live-updating terminal graphics - **Pace indicators** — 🧊 Cool. 🔥 On pace. 🚨 Too hot. 
- **Multiple output modes** — Full TUI, compact single-line for status bars, or JSON for scripting - **Automatic data persistence** — SQLite-backed history for trend analysis - **Dynamic window title** — Terminal tab shows current usage at a glance - **Zoom views** — Focus on recent activity with `--since` ## Installation Run `claude` and login first to refresh credentials. ### WinGet (Windows) ```powershell winget install JuanjoFuchs.ccburn ``` ### npx ```bash npx ccburn ``` ### npm ```bash npm install -g ccburn ``` ### pip ```bash pip install ccburn ``` ### From Source ```bash git clone https://github.com/JuanjoFuchs/ccburn.git cd ccburn pip install -e ".[dev]" ``` ## Quick Start 1. **Run Claude Code first** to ensure credentials are fresh: ```bash claude ``` 2. **Run ccburn:** ```bash ccburn # Session limit (default) ccburn weekly # Weekly limit ccburn weekly-sonnet # Weekly Sonnet limit ``` ## Usage Examples ```bash # Full TUI with burn-up chart (default) ccburn # Weekly usage view ccburn weekly # Compact output for tmux/status bars ccburn --compact # Output: Session: 🔥 45% (2h14m) | Weekly: 🧊 12% | Sonnet: 🧊 3% # JSON output for scripting/automation ccburn --json # Zoom to last 30 minutes ccburn --since 30m # Single snapshot (no live updates) ccburn --once # Custom refresh interval (seconds) ccburn --interval 10 ``` ## Command Line Reference ``` Usage: ccburn [OPTIONS] [LIMIT] Arguments: [LIMIT] Which limit to display [default: session] Options: session, weekly, weekly-sonnet Options: -i, --interval INTEGER Refresh interval in seconds [default: 5/30] -s, --since TEXT Only show data since (e.g., 30m, 2h, 1d) -j, --json Output JSON and exit -o, --once Print once and exit (no live updates) -c, --compact Single-line output for status bars --debug Show debug information --version Show version and exit --help Show this message and exit ``` ## Pace Indicators | Emoji | Status | Meaning | |-------|--------|---------| | 🧊 | Behind pace | Usage below expected budget — 
you have headroom | | 🔥 | On pace | Usage tracking with expected budget | | 🚨 | Ahead of pace | Usage above expected budget — slow down! | ## Requirements - **Python 3.10+** - **Claude Code** installed with valid credentials - Terminal with Unicode support (for charts and emojis) ## How It Works ccburn reads your Claude Code credentials and fetches usage data from the Anthropic API. It calculates: - **Budget pace** — Where you "should" be based on time elapsed in the window - **Burn rate** — How fast you're consuming your limit - **Time to limit** — Estimated time until you hit 100% (if current rate continues) Data is stored locally in SQLite for historical analysis and to minimize API calls when running multiple instances. ## Troubleshooting ### "Credentials not found" Ensure Claude Code is installed and you've logged in at least once: ```bash claude # This will prompt for login if needed ``` ### Chart not displaying correctly Ensure your terminal supports Unicode and has a monospace font with emoji support. Recommended terminals: - **Windows**: Windows Terminal - **macOS**: iTerm2, Terminal.app - **Linux**: Kitty, Alacritty, GNOME Terminal ### Stale data indicator If you see "(stale)" in the header, ccburn couldn't reach the API. It will continue showing cached data and retry automatically. ## Contributing Contributions are welcome! Please feel free to submit a Pull Request. ## License [MIT](LICENSE) ## Acknowledgments - [Rich](https://github.com/Textualize/rich) — Beautiful terminal formatting - [Plotext](https://github.com/piccolomo/plotext) — Terminal plotting - [Typer](https://github.com/tiangolo/typer) — CLI framework
text/markdown
JuanjoFuchs
null
null
null
null
claude, anthropic, usage, monitoring, tui, visualization, cli
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "P...
[]
null
null
>=3.10
[]
[]
[]
[ "typer[all]>=0.9.0", "rich>=13.0.0", "plotext>=5.2.0", "httpx>=0.25.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"", "build>=1.0.0; extra == \"dev\"", "twine>=4.0.0; extra == \"dev\"", "pyinstaller>=6....
[]
[]
[]
[ "Homepage, https://github.com/JuanjoFuchs/ccburn", "Repository, https://github.com/JuanjoFuchs/ccburn.git", "Issues, https://github.com/JuanjoFuchs/ccburn/issues", "Documentation, https://github.com/JuanjoFuchs/ccburn#readme" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:31:08.775219
ccburn-0.3.0.tar.gz
37,398
60/7b/f37a625a151cc6fc653375c7d6fe96477967299f81912cdbad8e6914e891/ccburn-0.3.0.tar.gz
source
sdist
null
false
9e471d9c5aa2448fcf5857d47d80af1f
6572c3d9e47bc751c2122bfbabca0cecf8b2bae48e7c33dbdc33ac9cc5db939e
607bf37a625a151cc6fc653375c7d6fe96477967299f81912cdbad8e6914e891
MIT
[ "LICENSE" ]
2.4
pyxecm
3.1.9
A Python library to interact with the OpenText Content Management REST API
# PYXECM A Python library to interact with the OpenText Content Management REST API. The product API documentation is available on [OpenText Developer](https://developer.opentext.com/ce/products/extendedecm). Detailed documentation of this package is available [here](https://opentext.github.io/pyxecm/). ## Quick start - Library usage Install the latest version from PyPI: ```bash pip install pyxecm ``` ### Start using the package libraries Example usage of the OTCS class; more details can be found in the docs: ```python from pyxecm import OTCS otcs_object = OTCS( protocol="https", hostname="otcs.domain.tld", port="443", public_url="otcs.domain.tld", username="admin", password="********", base_path="/cs/llisapi.dll", ) otcs_object.authenticate() nodes = otcs_object.get_subnodes(2000) for node in nodes["results"]: print(node["data"]["properties"]["id"], node["data"]["properties"]["name"]) ``` ## Quick start - Customizer usage - Create a `.env` file as described here: [sample-environment-variables](customizerapisettings/#sample-environment-variables) - Create a payload file to define what the customizer should do, as described here: [payload-syntax](payload-syntax) ```bash pip install pyxecm[customizer] pyxecm-customizer PAYLOAD.tfvars/PAYLOAD.yaml ``` ## Quick start - API - Install pyxecm with the `api` and `customizer` extras - Launch the REST API server - Access the Customizer API at [http://localhost:8000/api](http://localhost:8000/api) ```bash pip install pyxecm[api,customizer] pyxecm-api ``` ## Disclaimer Copyright © 2025 Open Text Corporation, All Rights Reserved. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
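The `get_subnodes` response shape used in the example above (`nodes["results"][i]["data"]["properties"]`) is easy to flatten into plain tuples. A small helper sketch, assuming only the response layout shown in the README (verify it against your server's actual responses):

```python
def node_summaries(nodes: dict) -> list[tuple[int, str]]:
    """Flatten an OTCS subnode response into (id, name) tuples.

    Assumes the {"results": [{"data": {"properties": {...}}}]}
    layout from the OTCS example above.
    """
    summaries = []
    for item in nodes.get("results", []):
        props = item["data"]["properties"]
        summaries.append((props["id"], props["name"]))
    return summaries

# Example with a hand-built response in the documented shape:
sample = {"results": [{"data": {"properties": {"id": 2000, "name": "Enterprise"}}}]}
print(node_summaries(sample))  # [(2000, 'Enterprise')]
```

Centralizing the dictionary traversal in one helper keeps the nested `["data"]["properties"]` indexing out of application code and gives a single place to adapt if the response layout differs.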
text/markdown
null
Kai Gatzweiler <kgatzweiler@opentext.com>, "Dr. Marc Diefenbruch" <mdiefenb@opentext.com>
null
null
null
appworks, archivecenter, contentserver, extendedecm, opentext, otds
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Topic :: Internet :: WWW/HTTP :: Dynamic Content :: Content Management System" ]
[]
null
null
>=3.10
[]
[]
[]
[ "lxml>=6.0.0", "opentelemetry-api>=1.34.1", "opentelemetry-exporter-otlp>=1.34.1", "opentelemetry-instrumentation-requests>=0.55b1", "opentelemetry-instrumentation-threading>=0.55b1", "opentelemetry-sdk>=1.34.1", "pandas>=2.3.1", "requests-toolbelt>=1.0.0", "requests>=2.32.4", "suds>=1.2.0", "we...
[]
[]
[]
[ "Homepage, https://github.com/opentext/pyxecm" ]
uv/0.9.25 {"installer":{"name":"uv","version":"0.9.25","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:31:14.823785
pyxecm-3.1.9.tar.gz
633,198
7f/9e/7991c433283ce0894e9f493811f6e33b8359cabb702aedcdd20a22a799be/pyxecm-3.1.9.tar.gz
source
sdist
null
false
aae6c19558c34e08f2228408007b1d62
4d4b18ffd80e50c116649f524ea9bc2ba4579421969299f34f1e984aa007a6fa
7f9e7991c433283ce0894e9f493811f6e33b8359cabb702aedcdd20a22a799be
null
[]
2.4
pyxecm
3.1.9
A Python library to interact with the OpenText Content Management REST API
# PYXECM A Python library to interact with the OpenText Content Management REST API. The product API documentation is available on [OpenText Developer](https://developer.opentext.com/ce/products/extendedecm). Detailed documentation of this package is available [here](https://opentext.github.io/pyxecm/). ## Quick start - Library usage Install the latest version from PyPI: ```bash pip install pyxecm ``` ### Start using the package libraries Example usage of the OTCS class; more details can be found in the docs: ```python from pyxecm import OTCS otcs_object = OTCS( protocol="https", hostname="otcs.domain.tld", port="443", public_url="otcs.domain.tld", username="admin", password="********", base_path="/cs/llisapi.dll", ) otcs_object.authenticate() nodes = otcs_object.get_subnodes(2000) for node in nodes["results"]: print(node["data"]["properties"]["id"], node["data"]["properties"]["name"]) ``` ## Quick start - Customizer usage - Create a `.env` file as described here: [sample-environment-variables](customizerapisettings/#sample-environment-variables) - Create a payload file to define what the customizer should do, as described here: [payload-syntax](payload-syntax) ```bash pip install pyxecm[customizer] pyxecm-customizer PAYLOAD.tfvars/PAYLOAD.yaml ``` ## Quick start - API - Install pyxecm with the `api` and `customizer` extras - Launch the REST API server - Access the Customizer API at [http://localhost:8000/api](http://localhost:8000/api) ```bash pip install pyxecm[api,customizer] pyxecm-api ``` ## Disclaimer Copyright © 2025 Open Text Corporation, All Rights Reserved. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
text/markdown
null
Kai Gatzweiler <kgatzweiler@opentext.com>, "Dr. Marc Diefenbruch" <mdiefenb@opentext.com>
null
null
null
appworks, archivecenter, contentserver, extendedecm, opentext, otds
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Topic :: Internet :: WWW/HTTP :: Dynamic Content :: Content Management System" ]
[]
null
null
>=3.10
[]
[]
[]
[ "lxml>=6.0.0", "opentelemetry-api>=1.34.1", "opentelemetry-exporter-otlp>=1.34.1", "opentelemetry-instrumentation-requests>=0.55b1", "opentelemetry-instrumentation-threading>=0.55b1", "opentelemetry-sdk>=1.34.1", "pandas>=2.3.1", "requests-toolbelt>=1.0.0", "requests>=2.32.4", "suds>=1.2.0", "we...
[]
[]
[]
[ "Homepage, https://github.com/opentext/pyxecm" ]
uv/0.9.25 {"installer":{"name":"uv","version":"0.9.25","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-01-16T04:31:16.216426
pyxecm-3.1.9-py3-none-any.whl
672,370
e9/4a/93349526f921bd849b9625a817bd253c189d72fb24fb8201ef1264806a8f/pyxecm-3.1.9-py3-none-any.whl
py3
bdist_wheel
null
false
9429bd26d1c22877cccfe28e55394ff2
74713b3bf2dd43afbc88e6111057b530fa7d54203019ebd562da50a9bf130718
e94a93349526f921bd849b9625a817bd253c189d72fb24fb8201ef1264806a8f
null
[]
2.1
bids-validator-deno
2.2.10
TypeScript implementation of the BIDS validator
[![Deno build](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml) [![Web validator](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml) [![Documentation Status](https://readthedocs.org/projects/bids-validator/badge/?version=latest)](https://bids-validator.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3688707.svg)](https://doi.org/10.5281/zenodo.3688707) # The BIDS Validator The BIDS Validator is a web application, command-line utility, and JavaScript/TypeScript library for assessing compliance with the [Brain Imaging Data Structure (BIDS)][BIDS] standard. ## Getting Started In most cases, the simplest way to use the validator is to browse to the [BIDS Validator][] web page: ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Dark theme)](docs/_static/web_entrypoint_dark.png#gh-dark-mode-only) ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Light theme)](docs/_static/web_entrypoint_light.png#gh-light-mode-only) The web validator runs in-browser, and does not transfer data to any remote server. In some contexts, such as when working on a remote server, it may be easier to use the command-line. The BIDS Validator can be run with the [Deno][] runtime (see [Deno - Installation][] for detailed installation instructions): ```shell deno run -ERWN jsr:@bids/validator ``` Deno by default sandboxes applications like a web browser. `-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables, read/write local files, and read network locations. 
A pre-compiled binary is published to [PyPI][] and may be installed with: ```shell pip install bids-validator-deno bids-validator-deno --help ``` ### Configuration file The schema validator accepts a JSON configuration file that reclassifies issues as warnings, errors, or ignored. ```json { "ignore": [ { "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" } ], "warning": [], "error": [ { "code": "NO_AUTHORS" } ] } ``` The entries are partial matches against the issues that the validator accumulates. Pass the `--json` flag to see the issues in detail. ### Development tools From the repository root, use `./local-run` to run with all permissions enabled by default: ```shell # Run from within the /bids-validator directory cd bids-validator # Run validator: ./local-run path/to/dataset ``` ## Schema validator test suite ```shell # Run tests: deno test --allow-env --allow-read --allow-write src/ ``` This test suite includes checks against expected output from bids-examples and may produce some expected failures for bids-examples datasets where either the schema or the validator is misaligned with the example dataset while under development. ## Modifying and building a new schema To modify the schema, you will need a clone of bids-standard/bids-specification. The README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema. After changes have been made to a local copy, the dereferenced single JSON file used by the validator must be rebuilt. The `bidsschematools` Python package does this; it can be installed from PyPI via pip or from a local checkout, and lives in the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode. The command to compile a dereferenced schema is `bst -v export --schema src/schema --output src/schema.json` (this assumes you are in the root of the bids-specification repo). 
Once compiled, the schema can be passed to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>`. ## Documentation The BIDS Validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/). [BIDS]: https://bids.neuroimaging.io [BIDS Validator]: https://bids-standard.github.io/bids-validator/ [Deno]: https://deno.com/ [Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/ [PyPI]: https://pypi.org/project/bids-validator-deno/
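The configuration file described above can also be generated programmatically, which is convenient when the same reclassification rules are shared across many datasets. A short Python sketch that writes the documented structure (the output filename is arbitrary; pass whatever path you give the validator):

```python
import json

# Reproduce the documented config structure: each entry is a partial
# match against the issues the validator accumulates.
config = {
    "ignore": [{"code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json"}],
    "warning": [],
    "error": [{"code": "NO_AUTHORS"}],
}

# Write it out for the validator to consume.
with open("bids-validator-config.json", "w") as f:
    json.dump(config, f, indent=2)
```

Because the entries are partial matches, omitting `location` (as in the `NO_AUTHORS` rule) applies the reclassification to every occurrence of that code.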
text/markdown
bids-standard developers
null
null
null
MIT
BIDS, BIDS validator
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering :: Bio-Informatics", "License :: OSI Approved :: MIT License", "Programming Language :: JavaScript" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://bids-validator.readthedocs.io/", "Source code, https://github.com/bids-standard/bids-validator", "Issues, https://github.com/bids-standard/bids-validator/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:31:58.787218
bids_validator_deno-2.2.10-py2.py3-none-macosx_11_0_arm64.whl
40,616,636
ad/d2/2503076540f6478ce9ad0e26c969257fade264c1ae1a3be39d773c49dcf8/bids_validator_deno-2.2.10-py2.py3-none-macosx_11_0_arm64.whl
py2.py3
bdist_wheel
null
false
a9a3a598e72c641f42f0734b0f885353
18ca532909452f668cbc7642e5b0b6cfc7a43a5cbd8c66d965759f8c488049d6
add22503076540f6478ce9ad0e26c969257fade264c1ae1a3be39d773c49dcf8
null
[]
2.1
bids-validator-deno
2.2.10
TypeScript implementation of the BIDS validator
[![Deno build](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml) [![Web validator](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml) [![Documentation Status](https://readthedocs.org/projects/bids-validator/badge/?version=latest)](https://bids-validator.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3688707.svg)](https://doi.org/10.5281/zenodo.3688707) # The BIDS Validator The BIDS Validator is a web application, command-line utility, and JavaScript/TypeScript library for assessing compliance with the [Brain Imaging Data Structure (BIDS)][BIDS] standard. ## Getting Started In most cases, the simplest way to use the validator is to browse to the [BIDS Validator][] web page: ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Dark theme)](docs/_static/web_entrypoint_dark.png#gh-dark-mode-only) ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Light theme)](docs/_static/web_entrypoint_light.png#gh-light-mode-only) The web validator runs in-browser, and does not transfer data to any remote server. In some contexts, such as when working on a remote server, it may be easier to use the command-line. The BIDS Validator can be run with the [Deno][] runtime (see [Deno - Installation][] for detailed installation instructions): ```shell deno run -ERWN jsr:@bids/validator ``` Deno by default sandboxes applications like a web browser. `-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables, read/write local files, and read network locations. 
A pre-compiled binary is published to [PyPI][] and may be installed with: ```shell pip install bids-validator-deno bids-validator-deno --help ``` ### Configuration file The schema validator accepts a JSON configuration file that reclassifies issues as warnings, errors, or ignored. ```json { "ignore": [ { "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" } ], "warning": [], "error": [ { "code": "NO_AUTHORS" } ] } ``` The entries are partial matches against the issues that the validator accumulates. Pass the `--json` flag to see the issues in detail. ### Development tools From the repository root, use `./local-run` to run with all permissions enabled by default: ```shell # Run from within the /bids-validator directory cd bids-validator # Run validator: ./local-run path/to/dataset ``` ## Schema validator test suite ```shell # Run tests: deno test --allow-env --allow-read --allow-write src/ ``` This test suite includes checks against expected output from bids-examples and may produce some expected failures for bids-examples datasets where either the schema or the validator is misaligned with the example dataset while under development. ## Modifying and building a new schema To modify the schema, you will need a clone of bids-standard/bids-specification. The README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema. After changes have been made to a local copy, the dereferenced single JSON file used by the validator must be rebuilt. The `bidsschematools` Python package does this; it can be installed from PyPI via pip or from a local checkout, and lives in the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode. The command to compile a dereferenced schema is `bst -v export --schema src/schema --output src/schema.json` (this assumes you are in the root of the bids-specification repo). 
Once compiled, the schema can be passed to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>`. ## Documentation The BIDS Validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/). [BIDS]: https://bids.neuroimaging.io [BIDS Validator]: https://bids-standard.github.io/bids-validator/ [Deno]: https://deno.com/ [Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/ [PyPI]: https://pypi.org/project/bids-validator-deno/
text/markdown
bids-standard developers
null
null
null
MIT
BIDS, BIDS validator
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering :: Bio-Informatics", "License :: OSI Approved :: MIT License", "Programming Language :: JavaScript" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://bids-validator.readthedocs.io/", "Source code, https://github.com/bids-standard/bids-validator", "Issues, https://github.com/bids-standard/bids-validator/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:32:02.498566
bids_validator_deno-2.2.10-py2.py3-none-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl
43,003,197
a8/50/15c0a6d78ab1887a02f5ab2cc73e3de6b544973714e1728e8f86a3510029/bids_validator_deno-2.2.10-py2.py3-none-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl
py2.py3
bdist_wheel
null
false
4361a8fff6de90b47537657e88f96c4c
ef4baaf9a67011cb892fc8571e6dbf18a88ce0854c6ddca80718ee910aecffd7
a85015c0a6d78ab1887a02f5ab2cc73e3de6b544973714e1728e8f86a3510029
null
[]
2.1
bids-validator-deno
2.2.10
TypeScript implementation of the BIDS validator
[![Deno build](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml) [![Web validator](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml) [![Documentation Status](https://readthedocs.org/projects/bids-validator/badge/?version=latest)](https://bids-validator.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3688707.svg)](https://doi.org/10.5281/zenodo.3688707) # The BIDS Validator The BIDS Validator is a web application, command-line utility, and JavaScript/TypeScript library for assessing compliance with the [Brain Imaging Data Structure (BIDS)][BIDS] standard. ## Getting Started In most cases, the simplest way to use the validator is to browse to the [BIDS Validator][] web page: ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Dark theme)](docs/_static/web_entrypoint_dark.png#gh-dark-mode-only) ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Light theme)](docs/_static/web_entrypoint_light.png#gh-light-mode-only) The web validator runs in-browser, and does not transfer data to any remote server. In some contexts, such as when working on a remote server, it may be easier to use the command-line. The BIDS Validator can be run with the [Deno][] runtime (see [Deno - Installation][] for detailed installation instructions): ```shell deno run -ERWN jsr:@bids/validator ``` Deno by default sandboxes applications like a web browser. `-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables, read/write local files, and read network locations. 
A pre-compiled binary is published to [PyPI][] and may be installed with: ```shell pip install bids-validator-deno bids-validator-deno --help ``` ### Configuration file The schema validator accepts a JSON configuration file that reclassifies issues as warnings, errors, or ignored. ```json { "ignore": [ { "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" } ], "warning": [], "error": [ { "code": "NO_AUTHORS" } ] } ``` The entries are partial matches against the issues that the validator accumulates. Pass the `--json` flag to see the issues in detail. ### Development tools From the repository root, use `./local-run` to run with all permissions enabled by default: ```shell # Run from within the /bids-validator directory cd bids-validator # Run validator: ./local-run path/to/dataset ``` ## Schema validator test suite ```shell # Run tests: deno test --allow-env --allow-read --allow-write src/ ``` This test suite includes checks against expected output from bids-examples and may produce some expected failures for bids-examples datasets where either the schema or the validator is misaligned with the example dataset while under development. ## Modifying and building a new schema To modify the schema, you will need a clone of bids-standard/bids-specification. The README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema. After changes have been made to a local copy, the dereferenced single JSON file used by the validator must be rebuilt. The `bidsschematools` Python package does this; it can be installed from PyPI via pip or from a local checkout, and lives in the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode. The command to compile a dereferenced schema is `bst -v export --schema src/schema --output src/schema.json` (this assumes you are in the root of the bids-specification repo). 
Once compiled, the schema can be passed to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>`. ## Documentation The BIDS Validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/). [BIDS]: https://bids.neuroimaging.io [BIDS Validator]: https://bids-standard.github.io/bids-validator/ [Deno]: https://deno.com/ [Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/ [PyPI]: https://pypi.org/project/bids-validator-deno/
text/markdown
bids-standard developers
null
null
null
MIT
BIDS, BIDS validator
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering :: Bio-Informatics", "License :: OSI Approved :: MIT License", "Programming Language :: JavaScript" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://bids-validator.readthedocs.io/", "Source code, https://github.com/bids-standard/bids-validator", "Issues, https://github.com/bids-standard/bids-validator/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:32:06.344359
bids_validator_deno-2.2.10-py2.py3-none-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
44,276,271
79/8e/e1a9aef82e0ba90b46022b7f2293ff8ffe950c1fc1df5e643e786985fc9f/bids_validator_deno-2.2.10-py2.py3-none-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
py2.py3
bdist_wheel
null
false
10f8c6e2e61790d9eeafaa2efe516744
292e482281df52b0b80bf21b30fc0186689668b89166e676a5916c313ece452e
798ee1a9aef82e0ba90b46022b7f2293ff8ffe950c1fc1df5e643e786985fc9f
null
[]
2.1
bids-validator-deno
2.2.10
TypeScript implementation of the BIDS validator
[![Deno build](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml) [![Web validator](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml) [![Documentation Status](https://readthedocs.org/projects/bids-validator/badge/?version=latest)](https://bids-validator.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3688707.svg)](https://doi.org/10.5281/zenodo.3688707) # The BIDS Validator The BIDS Validator is a web application, command-line utility, and JavaScript/TypeScript library for assessing compliance with the [Brain Imaging Data Structure (BIDS)][BIDS] standard. ## Getting Started In most cases, the simplest way to use the validator is to browse to the [BIDS Validator][] web page: ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Dark theme)](docs/_static/web_entrypoint_dark.png#gh-dark-mode-only) ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Light theme)](docs/_static/web_entrypoint_light.png#gh-light-mode-only) The web validator runs in-browser, and does not transfer data to any remote server. In some contexts, such as when working on a remote server, it may be easier to use the command-line. The BIDS Validator can be run with the [Deno][] runtime (see [Deno - Installation][] for detailed installation instructions): ```shell deno run -ERWN jsr:@bids/validator ``` Deno by default sandboxes applications like a web browser. `-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables, read/write local files, and read network locations. 
A pre-compiled binary is published to [PyPI][] and may be installed with: ```shell pip install bids-validator-deno bids-validator-deno --help ``` ### Configuration file The schema validator accepts a JSON configuration file that reclassifies issues as warnings, errors, or ignored. ```json { "ignore": [ { "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" } ], "warning": [], "error": [ { "code": "NO_AUTHORS" } ] } ``` The entries are partial matches against the issues that the validator accumulates. Pass the `--json` flag to see the issues in detail. ### Development tools From the repository root, use `./local-run` to run with all permissions enabled by default: ```shell # Run from within the /bids-validator directory cd bids-validator # Run validator: ./local-run path/to/dataset ``` ## Schema validator test suite ```shell # Run tests: deno test --allow-env --allow-read --allow-write src/ ``` This test suite includes checks against expected output from bids-examples and may produce some expected failures for bids-examples datasets where either the schema or the validator is misaligned with the example dataset while under development. ## Modifying and building a new schema To modify the schema, you will need a clone of bids-standard/bids-specification. The README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema. After changes have been made to a local copy, the dereferenced single JSON file used by the validator must be rebuilt. The `bidsschematools` Python package does this; it can be installed from PyPI via pip or from a local checkout, and lives in the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode. The command to compile a dereferenced schema is `bst -v export --schema src/schema --output src/schema.json` (this assumes you are in the root of the bids-specification repo). 
Once compiled, the schema can be passed to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>`. ## Documentation The BIDS Validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/). [BIDS]: https://bids.neuroimaging.io [BIDS Validator]: https://bids-standard.github.io/bids-validator/ [Deno]: https://deno.com/ [Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/ [PyPI]: https://pypi.org/project/bids-validator-deno/
text/markdown
bids-standard developers
null
null
null
MIT
BIDS, BIDS validator
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering :: Bio-Informatics", "License :: OSI Approved :: MIT License", "Programming Language :: JavaScript" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://bids-validator.readthedocs.io/", "Source code, https://github.com/bids-standard/bids-validator", "Issues, https://github.com/bids-standard/bids-validator/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:32:10.128680
bids_validator_deno-2.2.10-py2.py3-none-win_amd64.whl
43,508,910
be/50/0a0e59f491fdc3517c328c354928a1fbd218f34eeb7194ce2c641cd5aa7c/bids_validator_deno-2.2.10-py2.py3-none-win_amd64.whl
py2.py3
bdist_wheel
null
false
188a7a82e675b4374ab8b157af11a83c
e927fb4ed61520739c8445e3faebef081800fe8c88c4d4ffc6d124f12efa2ae8
be500a0e59f491fdc3517c328c354928a1fbd218f34eeb7194ce2c641cd5aa7c
null
[]
2.1
bids-validator-deno
2.2.10
TypeScript implementation of the BIDS validator
[![Deno build](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml) [![Web validator](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml/badge.svg)](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml) [![Documentation Status](https://readthedocs.org/projects/bids-validator/badge/?version=latest)](https://bids-validator.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3688707.svg)](https://doi.org/10.5281/zenodo.3688707) # The BIDS Validator The BIDS Validator is a web application, command-line utility, and JavaScript/TypeScript library for assessing compliance with the [Brain Imaging Data Structure (BIDS)][BIDS] standard. ## Getting Started In most cases, the simplest way to use the validator is to browse to the [BIDS Validator][] web page: ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Dark theme)](docs/_static/web_entrypoint_dark.png#gh-dark-mode-only) ![The web interface to the BIDS Validator with the "Select Dataset Files" button highlighted. (Light theme)](docs/_static/web_entrypoint_light.png#gh-light-mode-only) The web validator runs in-browser, and does not transfer data to any remote server. In some contexts, such as when working on a remote server, it may be easier to use the command-line. The BIDS Validator can be run with the [Deno][] runtime (see [Deno - Installation][] for detailed installation instructions): ```shell deno run -ERWN jsr:@bids/validator ``` Deno by default sandboxes applications like a web browser. `-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables, read/write local files, and read network locations. 
A pre-compiled binary is published to [PyPI][] and may be installed with: ``` pip install bids-validator-deno bids-validator-deno --help ``` ### Configuration file The schema validator accepts a JSON configuration file that reclassifies issues as warnings, errors, or ignored. ```json { "ignore": [ { "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" } ], "warning": [], "error": [ { "code": "NO_AUTHORS" } ] } ``` The entries are partial matches against the `issues` that the validator accumulates. Pass the `--json` flag to see the issues in detail. ### Development tools Use `./local-run` to run with all permissions enabled by default: ```shell # Run from within the /bids-validator directory cd bids-validator # Run validator: ./local-run path/to/dataset ``` ## Schema validator test suite ```shell # Run tests: deno test --allow-env --allow-read --allow-write src/ ``` This test suite compares against expected output from bids-examples and may report expected failures for bids-examples datasets where the schema or validator is misaligned with the example dataset during development. ## Modifying and building a new schema To modify the schema, clone bids-standard/bids-specification; the README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema. After making changes to a local copy, rebuild the dereferenced single JSON file used by the validator. The `bidsschematools` Python package does this; it can be installed from PyPI via pip or from a local checkout, and lives in the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode The command to compile a dereferenced schema is `bst -v export --schema src/schema --output src/schema.json` (this assumes you are in the root of the bids-specification repo). 
Once compiled, the schema can be passed to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>` ## Documentation The BIDS validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/). [BIDS]: https://bids.neuroimaging.io [BIDS Validator]: https://bids-standard.github.io/bids-validator/ [Deno]: https://deno.com/ [Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/ [PyPI]: https://pypi.org/project/bids-validator-deno/
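The JSON configuration file shown above can also be generated programmatically, which is convenient for scripted CI runs. A minimal sketch (the filename `validator-config.json` is an arbitrary choice, not one the validator requires):

```python
import json

# Build the same reclassification config shown in the README: ignore one
# recommended-key issue in /T1w.json and promote NO_AUTHORS to a hard error.
config = {
    "ignore": [{"code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json"}],
    "warning": [],
    "error": [{"code": "NO_AUTHORS"}],
}

# Write it to disk so it can be passed to the validator.
with open("validator-config.json", "w") as f:
    json.dump(config, f, indent=2)
```

Each entry only needs the fields you want to match on; unspecified fields match anything.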
text/markdown
bids-standard developers
null
null
null
MIT
BIDS, BIDS validator
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering :: Bio-Informatics", "License :: OSI Approved :: MIT License", "Programming Language :: JavaScript" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://bids-validator.readthedocs.io/", "Source code, https://github.com/bids-standard/bids-validator", "Issues, https://github.com/bids-standard/bids-validator/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:32:12.907485
bids_validator_deno-2.2.10.tar.gz
82,126
87/73/e40cc4d1a95d719428337c0376f947ad600edf4a928cd406c0e8a8e7efa2/bids_validator_deno-2.2.10.tar.gz
source
sdist
null
false
31ab00b6919ade05b1f84703e06bfb57
7a421a794900c64b4f32c2b5a9680a0e190004605d92980eeb59360292e7acfc
8773e40cc4d1a95d719428337c0376f947ad600edf4a928cd406c0e8a8e7efa2
null
[]
2.4
vllm-sr
0.1.0b2.dev20260116043205
vLLM Semantic Router - Intelligent routing for Mixture-of-Models
# vLLM Semantic Router Intelligent Router for Mixture-of-Models (MoM). GitHub: https://github.com/vllm-project/semantic-router ## Quick Start ### Installation ```bash # Install from PyPI pip install vllm-sr # Or install from source (development) cd src/vllm-sr pip install -e . ``` ### Usage ```bash # Initialize vLLM Semantic Router Configuration vllm-sr init # Start the router (includes dashboard) vllm-sr serve # Open dashboard in browser vllm-sr dashboard # View logs vllm-sr logs router vllm-sr logs envoy vllm-sr logs dashboard # Check status vllm-sr status # Stop vllm-sr stop ``` ## Features - **Router**: Intelligent request routing based on intent classification - **Envoy Proxy**: High-performance proxy with ext_proc integration - **Dashboard**: Web UI for monitoring and testing (http://localhost:8700) - **Metrics**: Prometheus metrics endpoint (http://localhost:9190/metrics) ## Endpoints After running `vllm-sr serve`, the following endpoints are available: | Endpoint | Port | Description | |----------|------|-------------| | Dashboard | 8700 | Web UI for monitoring and Playground | | API | 8888* | Chat completions API (configurable in config.yaml) | | Metrics | 9190 | Prometheus metrics | | gRPC | 50051 | Router gRPC (internal) | *Default port, configurable via `listeners` in config.yaml ## Configuration ### File Descriptor Limits The CLI automatically sets file descriptor limits to 65,536 for Envoy proxy. To customize: ```bash export VLLM_SR_NOFILE_LIMIT=100000 # Optional (min: 8192) vllm-sr serve ``` ## License Apache 2.0
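The file-descriptor behaviour described above can be sketched in plain Python. This is not the actual CLI code, just an illustration of the documented rule (raise the soft `RLIMIT_NOFILE` toward the requested value, default 65,536 with a floor of 8,192, never exceeding the hard limit):

```python
import os
import resource

def target_nofile_limit(requested: int, hard: int, floor: int = 8192) -> int:
    """Clamp the requested soft limit to [floor, hard limit]."""
    wanted = max(requested, floor)
    # An unbounded hard limit imposes no cap on the soft limit.
    return wanted if hard == resource.RLIM_INFINITY else min(wanted, hard)

# Honour VLLM_SR_NOFILE_LIMIT the way the README describes.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
requested = int(os.environ.get("VLLM_SR_NOFILE_LIMIT", "65536"))
resource.setrlimit(resource.RLIMIT_NOFILE, (target_nofile_limit(requested, hard), hard))
```

Raising the soft limit above the hard limit requires privileges, which is why the sketch caps at the hard limit rather than failing.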
text/markdown
vLLM-SR Team
null
null
null
Apache-2.0
vllm, semantic-router, llm, routing, caching
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "click>=8.1.7", "pyyaml>=6.0.2", "jinja2>=3.1.4", "requests>=2.31.0", "pydantic>=2.0.0", "huggingface_hub[cli]>=0.20.0", "pytest>=8.4.1; extra == \"dev\"", "black>=22.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/vllm-project/vllm-semantic-router", "Documentation, https://github.com/vllm-project/vllm-semantic-router/blob/main/README.md", "Repository, https://github.com/vllm-project/vllm-semantic-router", "Issues, https://github.com/vllm-project/vllm-semantic-router/issues" ]
twine/6.2.0 CPython/3.11.14
2026-01-16T04:32:16.194438
vllm_sr-0.1.0b2.dev20260116043205-py3-none-any.whl
44,105
f3/ca/3ece201c591caaf0b63e3de4ea8cdb6cf4ffb7b46a8a4109a6e20257ec6b/vllm_sr-0.1.0b2.dev20260116043205-py3-none-any.whl
py3
bdist_wheel
null
false
3b8adae3bd7414bd39c67c638bd54693
9b476d83b3d6e7740dd8c0b492b535226f91ccd0c568a0353becedaf95070728
f3ca3ece201c591caaf0b63e3de4ea8cdb6cf4ffb7b46a8a4109a6e20257ec6b
null
[]
2.4
vllm-sr
0.1.0b2.dev20260116043205
vLLM Semantic Router - Intelligent routing for Mixture-of-Models
# vLLM Semantic Router Intelligent Router for Mixture-of-Models (MoM). GitHub: https://github.com/vllm-project/semantic-router ## Quick Start ### Installation ```bash # Install from PyPI pip install vllm-sr # Or install from source (development) cd src/vllm-sr pip install -e . ``` ### Usage ```bash # Initialize vLLM Semantic Router Configuration vllm-sr init # Start the router (includes dashboard) vllm-sr serve # Open dashboard in browser vllm-sr dashboard # View logs vllm-sr logs router vllm-sr logs envoy vllm-sr logs dashboard # Check status vllm-sr status # Stop vllm-sr stop ``` ## Features - **Router**: Intelligent request routing based on intent classification - **Envoy Proxy**: High-performance proxy with ext_proc integration - **Dashboard**: Web UI for monitoring and testing (http://localhost:8700) - **Metrics**: Prometheus metrics endpoint (http://localhost:9190/metrics) ## Endpoints After running `vllm-sr serve`, the following endpoints are available: | Endpoint | Port | Description | |----------|------|-------------| | Dashboard | 8700 | Web UI for monitoring and Playground | | API | 8888* | Chat completions API (configurable in config.yaml) | | Metrics | 9190 | Prometheus metrics | | gRPC | 50051 | Router gRPC (internal) | *Default port, configurable via `listeners` in config.yaml ## Configuration ### File Descriptor Limits The CLI automatically sets file descriptor limits to 65,536 for Envoy proxy. To customize: ```bash export VLLM_SR_NOFILE_LIMIT=100000 # Optional (min: 8192) vllm-sr serve ``` ## License Apache 2.0
text/markdown
vLLM-SR Team
null
null
null
Apache-2.0
vllm, semantic-router, llm, routing, caching
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "click>=8.1.7", "pyyaml>=6.0.2", "jinja2>=3.1.4", "requests>=2.31.0", "pydantic>=2.0.0", "huggingface_hub[cli]>=0.20.0", "pytest>=8.4.1; extra == \"dev\"", "black>=22.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/vllm-project/vllm-semantic-router", "Documentation, https://github.com/vllm-project/vllm-semantic-router/blob/main/README.md", "Repository, https://github.com/vllm-project/vllm-semantic-router", "Issues, https://github.com/vllm-project/vllm-semantic-router/issues" ]
twine/6.2.0 CPython/3.11.14
2026-01-16T04:32:17.479510
vllm_sr-0.1.0b2.dev20260116043205.tar.gz
35,796
01/50/1648f230f34c0271d92d666afe3eac764f26769145ca377fc4bc2e009301/vllm_sr-0.1.0b2.dev20260116043205.tar.gz
source
sdist
null
false
dba140eca8be2a253d824118a7e17bb3
8a367ec14b2b61803d54eca68ea3d7b360fe2f68ca61027eb0e5f7967fdefdf0
01501648f230f34c0271d92d666afe3eac764f26769145ca377fc4bc2e009301
null
[]
2.4
fast-scrape
0.1.0
High-performance HTML parsing library for Python
# scrape-rs (Python) [![PyPI](https://img.shields.io/pypi/v/scrape-rs)](https://pypi.org/project/scrape-rs) [![Python](https://img.shields.io/pypi/pyversions/scrape-rs)](https://pypi.org/project/scrape-rs) [![codecov](https://codecov.io/gh/bug-ops/scrape-rs/graph/badge.svg?token=6MQTONGT95&flag=python)](https://codecov.io/gh/bug-ops/scrape-rs) [![License](https://img.shields.io/pypi/l/scrape-rs)](../../LICENSE-MIT) Python bindings for scrape-rs, a high-performance HTML parsing library. ## Installation ```bash pip install scrape-rs ``` Alternative package managers: ```bash # uv (recommended - 10-100x faster) uv pip install scrape-rs # Poetry poetry add scrape-rs # Pipenv pipenv install scrape-rs ``` > [!IMPORTANT] > Requires Python 3.10 or later. ## Quick start ```python from scrape_rs import Soup html = "<html><body><div class='content'>Hello, World!</div></body></html>" soup = Soup(html) div = soup.find("div") print(div.text) # Hello, World! ``` ## Usage ### Find elements ```python from scrape_rs import Soup soup = Soup(html) # Find first element by tag div = soup.find("div") # Find all elements divs = soup.find_all("div") # CSS selectors for el in soup.select("div.content > p"): print(el.text) ``` ### Element properties ```python element = soup.find("a") # Get text content text = element.text # Get inner HTML html = element.inner_html # Get attribute href = element.get("href") ``` ### Batch processing ```python from scrape_rs import Soup # Process multiple documents in parallel documents = [html1, html2, html3] soups = Soup.parse_batch(documents) for soup in soups: print(soup.find("title").text) ``` > [!TIP] > Use `parse_batch()` for processing multiple documents. It uses all CPU cores automatically. 
## Type hints This package includes type stubs for full IDE support: ```python from scrape_rs import Soup, Tag def extract_links(soup: Soup) -> list[str]: return [a.get("href") for a in soup.select("a[href]")] ``` ## Performance Compared to BeautifulSoup on the same HTML documents: | Operation | Speedup | |-----------|---------| | Parse (1 KB) | **9.7x** faster | | Parse (219 KB) | **9.2x** faster | | Parse (5.9 MB) | **10.6x** faster | | `find(".class")` | **132x** faster | | `select(".class")` | **40x** faster | > [!TIP] > Run `python benches/compare_python.py` from the project root to benchmark on your hardware. ## Related packages Part of the [scrape-rs](https://github.com/bug-ops/scrape-rs) project: - `scrape-core` — Rust core library - `scrape-rs` (npm) — Node.js bindings - `@scrape-rs/wasm` — Browser/WASM bindings ## License Licensed under either of [Apache License, Version 2.0](../../LICENSE-APACHE) or [MIT License](../../LICENSE-MIT) at your option.
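The `extract_links` helper in the type-hints example can also be written without any third-party dependency, using only the standard library's `html.parser`; this is useful as a correctness baseline when benchmarking against scrape-rs:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags as the document streams through."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href is not None:
                self.links.append(href)

def extract_links(html: str) -> list[str]:
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

print(extract_links('<p><a href="/a">A</a><a>no href</a><a href="/b">B</a></p>'))
# → ['/a', '/b']
```

Unlike the CSS-selector version, this streaming approach cannot express `a[href]`-style queries directly, which is part of what a selector engine buys you.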
text/markdown; charset=UTF-8; variant=GFM
null
null
null
null
MIT OR Apache-2.0
html, parser, scraping, css-selectors, dom
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Lang...
[]
null
null
>=3.10
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/bug-ops/scrape-rs", "Documentation, https://github.com/bug-ops/scrape-rs", "Repository, https://github.com/bug-ops/scrape-rs", "Issues, https://github.com/bug-ops/scrape-rs/issues" ]
maturin/1.10.2
2026-01-16T04:32:24.753289
fast_scrape-0.1.0-cp314-cp314-macosx_11_0_arm64.whl
556,023
02/3b/d45861d6c5fde9387e310e1a9ef3efe363ba18df72b39840b36fdff459a8/fast_scrape-0.1.0-cp314-cp314-macosx_11_0_arm64.whl
cp314
bdist_wheel
null
false
eab3b1fa03ee53f0a45123a81281c024
fa081df9b38f9bfb287811ad73cd88e444c8e1d6f5b4ac6dcc30b65853bfd21e
023bd45861d6c5fde9387e310e1a9ef3efe363ba18df72b39840b36fdff459a8
null
[]
2.4
fast-scrape
0.1.0
High-performance HTML parsing library for Python
# scrape-rs (Python) [![PyPI](https://img.shields.io/pypi/v/scrape-rs)](https://pypi.org/project/scrape-rs) [![Python](https://img.shields.io/pypi/pyversions/scrape-rs)](https://pypi.org/project/scrape-rs) [![codecov](https://codecov.io/gh/bug-ops/scrape-rs/graph/badge.svg?token=6MQTONGT95&flag=python)](https://codecov.io/gh/bug-ops/scrape-rs) [![License](https://img.shields.io/pypi/l/scrape-rs)](../../LICENSE-MIT) Python bindings for scrape-rs, a high-performance HTML parsing library. ## Installation ```bash pip install scrape-rs ``` Alternative package managers: ```bash # uv (recommended - 10-100x faster) uv pip install scrape-rs # Poetry poetry add scrape-rs # Pipenv pipenv install scrape-rs ``` > [!IMPORTANT] > Requires Python 3.10 or later. ## Quick start ```python from scrape_rs import Soup html = "<html><body><div class='content'>Hello, World!</div></body></html>" soup = Soup(html) div = soup.find("div") print(div.text) # Hello, World! ``` ## Usage ### Find elements ```python from scrape_rs import Soup soup = Soup(html) # Find first element by tag div = soup.find("div") # Find all elements divs = soup.find_all("div") # CSS selectors for el in soup.select("div.content > p"): print(el.text) ``` ### Element properties ```python element = soup.find("a") # Get text content text = element.text # Get inner HTML html = element.inner_html # Get attribute href = element.get("href") ``` ### Batch processing ```python from scrape_rs import Soup # Process multiple documents in parallel documents = [html1, html2, html3] soups = Soup.parse_batch(documents) for soup in soups: print(soup.find("title").text) ``` > [!TIP] > Use `parse_batch()` for processing multiple documents. It uses all CPU cores automatically. 
## Type hints This package includes type stubs for full IDE support: ```python from scrape_rs import Soup, Tag def extract_links(soup: Soup) -> list[str]: return [a.get("href") for a in soup.select("a[href]")] ``` ## Performance Compared to BeautifulSoup on the same HTML documents: | Operation | Speedup | |-----------|---------| | Parse (1 KB) | **9.7x** faster | | Parse (219 KB) | **9.2x** faster | | Parse (5.9 MB) | **10.6x** faster | | `find(".class")` | **132x** faster | | `select(".class")` | **40x** faster | > [!TIP] > Run `python benches/compare_python.py` from the project root to benchmark on your hardware. ## Related packages Part of the [scrape-rs](https://github.com/bug-ops/scrape-rs) project: - `scrape-core` — Rust core library - `scrape-rs` (npm) — Node.js bindings - `@scrape-rs/wasm` — Browser/WASM bindings ## License Licensed under either of [Apache License, Version 2.0](../../LICENSE-APACHE) or [MIT License](../../LICENSE-MIT) at your option.
text/markdown; charset=UTF-8; variant=GFM
null
null
null
null
MIT OR Apache-2.0
html, parser, scraping, css-selectors, dom
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Lang...
[]
null
null
>=3.10
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/bug-ops/scrape-rs", "Documentation, https://github.com/bug-ops/scrape-rs", "Repository, https://github.com/bug-ops/scrape-rs", "Issues, https://github.com/bug-ops/scrape-rs/issues" ]
maturin/1.10.2
2026-01-16T04:32:27.317323
fast_scrape-0.1.0.tar.gz
73,462
47/5a/7c7b42c8b79060b175bd282e1c71a979b1263328404f70004a6fca5c0f5c/fast_scrape-0.1.0.tar.gz
source
sdist
null
false
b7d9afbed7b46add9e0394e5ee053d2a
f6f61b700b400f38c5e720745450eb529f5e6ccae8e919a82b7b73b4360e5773
475a7c7b42c8b79060b175bd282e1c71a979b1263328404f70004a6fca5c0f5c
null
[]
2.4
hubify-dataset
0.1.1
Convert object detection datasets (COCO, YOLO, Pascal VOC, etc.) to HuggingFace format
# Hubify ![Test & Lint](https://github.com/benjamintli/coco2hf/workflows/Test%20%26%20Lint/badge.svg) ![CLI Smoke Test](https://github.com/benjamintli/coco2hf/workflows/CLI%20Smoke%20Test/badge.svg) Convert object detection datasets to HuggingFace format and upload to the Hub. **Currently supported formats:** - COCO format annotations - YOLO format annotations - YOLO OBB format annotations **Coming soon:** Pascal VOC, Labelme, and more! ## Motivations for this tool HuggingFace has become the de facto *open source* community for uploading datasets and models. It is best known for LLMs and language models, but nothing about HuggingFace's dataset hosting is specific to language modeling. This tool consolidates the different formats from the object detection domain (COCO, Pascal VOC, etc.) into the layout HuggingFace suggests for its Image Datasets and uploads the result to the HuggingFace Hub. ## Installation ```bash # Install with uv (recommended) uv pip install -e . # Or with pip pip install -e . 
``` ## Usage After installation, you can use the `hubify` command: ```bash # Auto-detect annotations in train/validation/test directories hubify --data-dir /path/to/images --format coco # Manually specify annotation files hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --validation-annotations /path/to/instances_val2017.json # Generate sample visualizations hubify --data-dir /path/to/images --visualize # Push to HuggingFace Hub hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --push-to-hub username/my-dataset ``` Or for yolo: ``` hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo-obb --push-to-hub benjamintli/dota-v1.5 hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo --push-to-hub benjamintli/dota-v1.5 ``` Or run directly with Python (from the virtual environment): ```bash source .venv/bin/activate python -m src.main --data-dir /path/to/images ``` ## Expected Directory Structure * For coco: ``` data-dir/ ├── train/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) ├── validation/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) └── test/ (optional) ├── instances*.json └── *.jpg ``` ## Output The tool generates `metadata.jsonl` files in each split directory: ``` data-dir/ ├── train/ │ └── metadata.jsonl └── validation/ └── metadata.jsonl ``` Each line in `metadata.jsonl` contains: ```json { "file_name": "image.jpg", "objects": { "bbox": [[x, y, width, height], ...], "category": [0, 1, ...] 
} } ``` ## Options - `--data-dir`: Root directory containing train/validation/test subdirectories (required) - `--train-annotations`: Path to training annotations JSON (optional) - `--validation-annotations`: Path to validation annotations JSON (optional) - `--test-annotations`: Path to test annotations JSON (optional) - `--visualize`: Generate sample visualization images with bounding boxes - `--push-to-hub`: Push dataset to HuggingFace Hub (format: `username/dataset-name`) - `--token`: HuggingFace API token (optional, defaults to `HF_TOKEN` env var or `huggingface-cli login`) ### Authentication for Hub Push When using `--push-to-hub`, the tool looks for your HuggingFace token in this order: 1. `--token YOUR_TOKEN` (CLI argument) 2. `HF_TOKEN` environment variable 3. Token from `huggingface-cli login` If no token is found, you'll get a helpful error message with instructions.
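Returning to the `metadata.jsonl` format shown earlier, here is a sketch of producing and reading one record by hand; the bbox and category values are made up for illustration:

```python
import json

# One record per image, following the schema the tool generates:
# file_name plus parallel bbox/category lists under "objects".
records = [
    {
        "file_name": "image.jpg",
        "objects": {"bbox": [[10.0, 20.0, 100.0, 50.0]], "category": [0]},
    }
]

# JSON Lines: one JSON object per line, no enclosing array.
with open("metadata.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back line by line, the way a JSONL consumer would.
with open("metadata.jsonl") as f:
    rows = [json.loads(line) for line in f]
```

Keeping `bbox` and `category` as parallel lists (rather than a list of per-object dicts) matches the columnar layout HuggingFace datasets load efficiently.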
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "datasets>=4.4.2", "huggingface-hub>=1.2.3", "pillow>=12.1.0", "pyyaml>=6.0", "rich>=13.9.4", "ruff>=0.14.10", "ruff>=0.14.10; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:34:03.903980
hubify_dataset-0.1.1-py3-none-any.whl
17,432
13/d2/6e5bbac9d21d7f8a1fbbe11b7115c1c4a962806a07c559a930f07f150050/hubify_dataset-0.1.1-py3-none-any.whl
py3
bdist_wheel
null
false
c8300a435a7d67cc258bda7ba850e2f0
75b9f6ba70f19fed27dceb92191748c9ea682197d05cb2d733edbe2952aeda35
13d26e5bbac9d21d7f8a1fbbe11b7115c1c4a962806a07c559a930f07f150050
null
[ "LICENSE" ]
2.4
hubify-dataset
0.1.1
Convert object detection datasets (COCO, YOLO, Pascal VOC, etc.) to HuggingFace format
# Hubify ![Test & Lint](https://github.com/benjamintli/coco2hf/workflows/Test%20%26%20Lint/badge.svg) ![CLI Smoke Test](https://github.com/benjamintli/coco2hf/workflows/CLI%20Smoke%20Test/badge.svg) Convert object detection datasets to HuggingFace format and upload to the Hub. **Currently supported formats:** - COCO format annotations - YOLO format annotations - YOLO OBB format annotations **Coming soon:** Pascal VOC, Labelme, and more! ## Motivations for this tool HuggingFace has become the de facto *open source* community for uploading datasets and models. It is best known for LLMs and language models, but nothing about HuggingFace's dataset hosting is specific to language modeling. This tool consolidates the different formats from the object detection domain (COCO, Pascal VOC, etc.) into the layout HuggingFace suggests for its Image Datasets and uploads the result to the HuggingFace Hub. ## Installation ```bash # Install with uv (recommended) uv pip install -e . # Or with pip pip install -e . 
``` ## Usage After installation, you can use the `hubify` command: ```bash # Auto-detect annotations in train/validation/test directories hubify --data-dir /path/to/images --format coco # Manually specify annotation files hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --validation-annotations /path/to/instances_val2017.json # Generate sample visualizations hubify --data-dir /path/to/images --visualize # Push to HuggingFace Hub hubify --data-dir /path/to/images \ --train-annotations /path/to/instances_train2017.json \ --push-to-hub username/my-dataset ``` Or for yolo: ``` hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo-obb --push-to-hub benjamintli/dota-v1.5 hubify --data-dir ~/Downloads/DOTAv1.5 --format yolo --push-to-hub benjamintli/dota-v1.5 ``` Or run directly with Python (from the virtual environment): ```bash source .venv/bin/activate python -m src.main --data-dir /path/to/images ``` ## Expected Directory Structure * For coco: ``` data-dir/ ├── train/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) ├── validation/ │ ├── instances*.json (auto-detected) │ └── *.jpg (images) └── test/ (optional) ├── instances*.json └── *.jpg ``` ## Output The tool generates `metadata.jsonl` files in each split directory: ``` data-dir/ ├── train/ │ └── metadata.jsonl └── validation/ └── metadata.jsonl ``` Each line in `metadata.jsonl` contains: ```json { "file_name": "image.jpg", "objects": { "bbox": [[x, y, width, height], ...], "category": [0, 1, ...] 
} } ``` ## Options - `--data-dir`: Root directory containing train/validation/test subdirectories (required) - `--train-annotations`: Path to training annotations JSON (optional) - `--validation-annotations`: Path to validation annotations JSON (optional) - `--test-annotations`: Path to test annotations JSON (optional) - `--visualize`: Generate sample visualization images with bounding boxes - `--push-to-hub`: Push dataset to HuggingFace Hub (format: `username/dataset-name`) - `--token`: HuggingFace API token (optional, defaults to `HF_TOKEN` env var or `huggingface-cli login`) ### Authentication for Hub Push When using `--push-to-hub`, the tool looks for your HuggingFace token in this order: 1. `--token YOUR_TOKEN` (CLI argument) 2. `HF_TOKEN` environment variable 3. Token from `huggingface-cli login` If no token is found, you'll get a helpful error message with instructions.
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "datasets>=4.4.2", "huggingface-hub>=1.2.3", "pillow>=12.1.0", "pyyaml>=6.0", "rich>=13.9.4", "ruff>=0.14.10", "ruff>=0.14.10; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:34:05.168221
hubify_dataset-0.1.1.tar.gz
15,886
94/aa/93c82a0c85bdb7c6e60ede814d9ff5b24531899d57f09c776ba6493cef87/hubify_dataset-0.1.1.tar.gz
source
sdist
null
false
1e8a3adba8df609a2759e9505d4a93e4
96e6d82c954ee1b36191c7220ddc2b9c44151511ebeb01ecc8e6d22704634e3e
94aa93c82a0c85bdb7c6e60ede814d9ff5b24531899d57f09c776ba6493cef87
null
[ "LICENSE" ]
2.4
revidx
1.3.0
reVidx: Re-encode video to AVC/MP3 for legacy compatibility.
# reVidx ![FFmpeg](https://img.shields.io/badge/ffmpeg-black?logo=FFmpeg&logoColor=green) reVidx is a cross-platform CLI tool designed to re-encode video files to the AVC (H.264) codec and audio to MP3 for compatibility with legacy devices and players. Built on top of FFmpeg, it offers a simple, fast interface for batch video conversion, subtitle burning, and audio extraction (AAC) with a minimal terminal progress bar. ## Requirements - [Python](https://www.python.org/): 3.7 or higher. - [FFmpeg](https://ffmpeg.org/) and [FFprobe](https://ffmpeg.org/ffprobe.html): Should be available in your system PATH. ## Installation from source - Install directly from the GitHub repo ```bash git clone https://github.com/voidsnax/reVidx.git cd revidx pip install -e . ``` ## Features - **Video Conversion**: Converts `HEVC` (H.265) to `AVC` (H.264) with proper pixel format and colour range settings for older device compatibility. - **Audio Conversion**: Encodes audio to `mp3` (192kbps) for video files. - **Audio Extraction**: Extracts audio directly to `aac` (128kbps). - **Subtitle Burning**: Hardcodes subtitles into the video track. - **Progress Bar**: Displays real-time stats including `Size`, `Duration`, `Percentage`, and `Elapsed Time`. - **Fast Encoding**: Uses `crf` 20 (default) with the `veryfast` preset for a balance of speed and quality. - **Batch Processing**: Processes multiple video files sequentially with a single command. ## Usage ```txt usage: revidx INPUTFILES ... [OPTIONS] positional arguments: inputfiles Input video files options: -h, --help show this help message and exit -o [PATH/NAME] Output directory or filename -skip Skip video encoding (copy) -burn [INDEX/PATH] Burn subtitles into the video (default: first subtitle stream from input) -aindex INDEX Audio index (default: 0) -audio Extract only audio to AAC -crf VALUE CRF value (default: 20) ``` ### Basic Conversion Convert a video file to `avc/mp3`. The output will be saved as `input-AvcMp3.mp4`. 
```bash revidx inputvideo ``` ### Skip Video Encoding Convert the audio to `mp3` and keep the original video stream. ```bash revidx inputvideo -skip ``` ### Specify Output Path Convert a video file and save it with a specific name, or convert multiple files and save them to a specific folder. ```bash revidx inputvideo -o ~/Convertedvideo.mp4 ``` ```bash revidx inputvideo -o ./ConvertedVideos ``` ### Burn Subtitles Burn the first subtitle track into the video. ```bash revidx inputvideo -burn ``` Burn a specific subtitle stream from the input video. ```bash revidx inputvideo -burn 2 ``` Burn a specific external subtitle file. ```bash revidx inputvideo -burn pathtosubfile ``` ### Extract Audio Only Extract the audio track to an `aac` file. ```bash revidx inputvideo -audio ``` ## Notes - **Output Extensions**: When using `-o` to specify a filename, ensure it ends with `.mp4` or `.mkv` (video) or `.aac` (audio). - **Automatic Naming**: If no name is provided, output is saved as `input-AvcMp3.mp4` or `input.aac`. - **Overwriting**: The tool automatically overwrites existing files without prompting. - **Subtitles**: Subtitle indexes start from `0`; ensure a supported subtitle format is passed when `-burn` is provided with an external path. - **Interrupting**: Press `Ctrl+C` to abort the current encoding. ## License MIT License
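The automatic-naming rule in the Notes can be sketched in a few lines of Python; this helper is not part of revidx, it just illustrates the convention (`<stem>-AvcMp3.mp4` for video output, `<stem>.aac` for audio-only extraction):

```python
from pathlib import Path

def default_output_name(inputfile: str, audio_only: bool = False) -> str:
    """Derive the default output filename from the input's stem."""
    stem = Path(inputfile).stem
    return f"{stem}.aac" if audio_only else f"{stem}-AvcMp3.mp4"

print(default_output_name("movie.mkv"))                   # movie-AvcMp3.mp4
print(default_output_name("movie.mkv", audio_only=True))  # movie.aac
```

Using the stem (filename minus extension) is why any input container extension maps cleanly onto the fixed `.mp4`/`.aac` outputs.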
text/markdown
voidsnax
null
null
null
MIT
video, ffmpeg, converter, h264, mp3
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.7
[]
[]
[]
[ "colorama" ]
[]
[]
[]
[ "Homepage, https://github.com/voidsnax/reVidx" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:34:40.473011
revidx-1.3.0-py3-none-any.whl
8,940
00/85/feda0519e2806041ddaf5a4870f9547645b3401ebf7349be31ec08ea015b/revidx-1.3.0-py3-none-any.whl
py3
bdist_wheel
null
false
63845ae0bdcca51043d08e0be97e2962
389f485aba4ea7a4cb1b1a6a03ec34578f1b6bb2daf1cb11a620589443fcf867
0085feda0519e2806041ddaf5a4870f9547645b3401ebf7349be31ec08ea015b
null
[ "LICENSE" ]
2.4
revidx
1.3.0
reVidx: Re-encode video to AVC/MP3 for legacy compatibility.
# reVidx ![FFmpeg](https://img.shields.io/badge/ffmpeg-black?logo=FFmpeg&logoColor=green) reVidx is a cross-platform CLI tool designed to re-encode video files to the AVC (H.264) codec and audio to MP3 for compatibility with legacy devices and players. Built on top of FFmpeg, it offers a simple, fast interface for batch video conversion, subtitle burning, and audio extraction (AAC) with a minimal terminal progress bar. ## Requirements - [Python](https://www.python.org/): 3.7 or higher. - [FFmpeg](https://ffmpeg.org/) and [FFprobe](https://ffmpeg.org/ffprobe.html): Should be available in your system PATH. ## Installation from source - Install directly from the GitHub repo ```bash git clone https://github.com/voidsnax/reVidx.git cd revidx pip install -e . ``` ## Features - **Video Conversion**: Converts `HEVC` (H.265) to `AVC` (H.264) with proper pixel format and colour range settings for older device compatibility. - **Audio Conversion**: Encodes audio to `mp3` (192kbps) for video files. - **Audio Extraction**: Extracts audio directly to `aac` (128kbps). - **Subtitle Burning**: Hardcodes subtitles into the video track. - **Progress Bar**: Displays real-time stats including `Size`, `Duration`, `Percentage`, and `Elapsed Time`. - **Fast Encoding**: Uses `crf` 20 (default) with the `veryfast` preset for a balance of speed and quality. - **Batch Processing**: Processes multiple video files sequentially with a single command. ## Usage ```txt usage: revidx INPUTFILES ... [OPTIONS] positional arguments: inputfiles Input video files options: -h, --help show this help message and exit -o [PATH/NAME] Output directory or filename -skip Skip video encoding (copy) -burn [INDEX/PATH] Burn subtitles into the video (default: first subtitle stream from input) -aindex INDEX Audio index (default: 0) -audio Extract only audio to AAC -crf VALUE CRF value (default: 20) ``` ### Basic Conversion Convert a video file to `avc/mp3`. The output will be saved as `input-AvcMp3.mp4`. 
```bash revidx inputvideo ``` ### Skip Video Encoding Convert the audio to `mp3` and keep the original video stream. ```bash revidx inputvideo -skip ``` ### Specify Output Path Convert a video file and save it with a specific name, or convert multiple files and save them to a specific folder. ```bash revidx inputvideo -o ~/Convertedvideo.mp4 ``` ```bash revidx inputvideo -o ./ConvertedVideos ``` ### Burn Subtitles Burn the first subtitle track into the video. ```bash revidx inputvideo -burn ``` Burn a specific subtitle stream from the input video. ```bash revidx inputvideo -burn 2 ``` Burn a specific external subtitle file. ```bash revidx inputvideo -burn pathtosubfile ``` ### Extract Audio Only Extract the audio track to an `aac` file. ```bash revidx inputvideo -audio ``` ## Notes - **Output Extensions**: When using `-o` to specify a filename, ensure it ends with `.mp4` or `.mkv` (video) or `.aac` (audio). - **Automatic Naming**: If no name is provided, output is saved as `input-AvcMp3.mp4` or `input.aac`. - **Overwriting**: The tool automatically overwrites existing files without prompting. - **Subtitles**: Subtitle indexes start from `0`; ensure a supported subtitle format is passed when `-burn` is provided with an external path. - **Interrupting**: Press `Ctrl+C` to abort the current encoding. ## License MIT License
text/markdown
voidsnax
null
null
null
MIT
video, ffmpeg, converter, h264, mp3
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.7
[]
[]
[]
[ "colorama" ]
[]
[]
[]
[ "Homepage, https://github.com/voidsnax/reVidx" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:34:41.839542
revidx-1.3.0.tar.gz
9,570
80/a9/e8c934bc08352f811ac44b90244f35ed68bbc0e70a3880c703a619413947/revidx-1.3.0.tar.gz
source
sdist
null
false
c816625d0f09ff4f9a3fd2f9085bc0b5
33bc7dae814d00f75ee8de50d3e17ae77fe4dc98f12a9ae0670274e81a597082
80a9e8c934bc08352f811ac44b90244f35ed68bbc0e70a3880c703a619413947
null
[ "LICENSE" ]
2.2
ai-atlasforge
1.2.0
Autonomous AI research and development platform powered by Claude
# AI-AtlasForge An autonomous AI research and development platform powered by Claude. Run long-duration missions, accumulate cross-session knowledge, and build software autonomously. ## What is AI-AtlasForge? AI-AtlasForge is not a chatbot wrapper. It's an **autonomous research engine** that: - Runs multi-day missions without human intervention - Maintains mission continuity across context windows - Accumulates knowledge that persists across sessions - Self-corrects when drifting from objectives - Adversarially tests its own outputs ## Quick Start ### Prerequisites - Python 3.10+ - Anthropic API key (get one at https://console.anthropic.com/) - Linux environment (tested on Ubuntu 22.04+, Debian 12+) > **Platform Notes:** > - **Windows:** Use WSL2 (Windows Subsystem for Linux) > - **macOS:** Should work but is untested. Please report issues. ### Option 1: Standard Installation ```bash # Clone the repository git clone https://github.com/DragonShadows1978/AI-AtlasForge.git cd AI-AtlasForge # Run the installer ./install.sh # Configure your API key export ANTHROPIC_API_KEY='your-key-here' # Or edit config.yaml / .env # Verify installation ./verify.sh ``` ### Option 2: One-Liner Install ```bash curl -sSL https://raw.githubusercontent.com/DragonShadows1978/AI-AtlasForge/main/quick_install.sh | bash ``` ### Option 3: Docker Installation ```bash git clone https://github.com/DragonShadows1978/AI-AtlasForge.git cd AI-AtlasForge docker compose up -d # Dashboard at http://localhost:5050 ``` For detailed installation options, see [INSTALL.md](INSTALL.md) or [QUICKSTART.md](QUICKSTART.md). ### Running Your First Mission 1. **Start the Dashboard** (optional, for monitoring): ```bash make dashboard # Or: python3 dashboard_v2.py # Access at http://localhost:5050 ``` 2. **Create a Mission**: - Via Dashboard: Click "Create Mission" and enter your objectives - Via Sample: Run `make sample-mission` to load a hello-world mission - Via JSON: Create `state/mission.json` manually 3. 
**Start the Engine**: ```bash make run # Or: python3 claude_autonomous.py --mode=rd ``` ### Development Commands Run `make help` to see all available commands: ```bash make install # Full installation make verify # Verify installation make dashboard # Start dashboard make run # Start autonomous agent make docker # Start with Docker make sample-mission # Load sample mission ``` ## Architecture ``` +-------------------+ | Mission State | | (mission.json) | +--------+----------+ | +--------------+--------------+ | | +---------v---------+ +--------v--------+ | AtlasForge | | Dashboard | | (Execution Engine)| | (Monitoring) | +---------+---------+ +-----------------+ | +---------v---------+ | R&D Engine | | (State Machine) | +---------+---------+ | +---------v-------------------+ | Stage Pipeline | | | | PLANNING -> BUILDING -> | | TESTING -> ANALYZING -> | | CYCLE_END -> COMPLETE | +-----------------------------+ ``` ## Mission Lifecycle 1. **PLANNING** - Understand objectives, research codebase, create implementation plan 2. **BUILDING** - Implement the solution 3. **TESTING** - Validate implementation 4. **ANALYZING** - Evaluate results, identify issues 5. **CYCLE_END** - Generate reports, prepare continuation 6. **COMPLETE** - Mission finished Missions can iterate through multiple cycles until success criteria are met. ## Core Components ### atlasforge.py Main execution loop. Spawns Claude instances, manages state, handles graceful shutdown. ### af_engine.py State machine for mission execution. Manages stages, enforces constraints, tracks progress. ### dashboard_v2.py Web-based monitoring interface showing mission status, knowledge base, and analytics. 
### Knowledge Base SQLite database accumulating learnings across all missions: - Techniques discovered - Insights gained - Gotchas encountered - Reusable code patterns ### Adversarial Testing Separate Claude instances that test implementations: - RedTeam agents with no implementation knowledge - Mutation testing - Property-based testing ### GlassBox Post-mission introspection system: - Transcript parsing - Agent hierarchy reconstruction - Stage timeline visualization ## Key Features ### Mission Continuity Missions survive context window limits through: - Persistent mission.json state - Cycle-based iteration - Continuation prompts that preserve context ### Knowledge Accumulation Every mission adds to the knowledge base. The system improves over time as it learns patterns, gotchas, and techniques. ### Autonomous Operation Designed for unattended execution: - Graceful crash recovery - Stage checkpointing - Automatic cycle progression ## Directory Structure ``` AI-AtlasForge/ +-- atlasforge.py # Main entry point +-- af_engine.py # Stage state machine +-- dashboard_v2.py # Web dashboard +-- adversarial_testing/ # Testing framework +-- atlasforge_enhancements/ # Enhancement modules +-- workspace/ # Active workspace | +-- glassbox/ # Introspection tools | +-- artifacts/ # Plans, reports | +-- research/ # Notes, findings | +-- tests/ # Test scripts +-- state/ # Runtime state | +-- mission.json # Current mission | +-- claude_state.json # Execution state +-- missions/ # Mission workspaces +-- atlasforge_data/ | +-- knowledge_base/ # Accumulated learnings +-- logs/ # Execution logs ``` ## Configuration AI-AtlasForge uses environment variables for configuration: | Variable | Default | Description | |----------|---------|-------------| | `ATLASFORGE_PORT` | `5050` | Dashboard port | | `ATLASFORGE_ROOT` | (script directory) | Base directory | | `ATLASFORGE_DEBUG` | `false` | Enable debug logging | ## Dashboard Features The web dashboard provides real-time monitoring: - **Mission 
Status** - Current stage, progress, timing - **Activity Feed** - Live log of agent actions - **Knowledge Base** - Search and browse learnings - **Analytics** - Token usage, cost tracking - **Mission Queue** - Queue and schedule missions - **GlassBox** - Post-mission analysis ## Philosophy **First principles only.** No frameworks hiding integration failures. Every component built from scratch for full visibility. **Speed of machine, not human.** Designed for autonomous operation. Check in when convenient, not when required. **Knowledge accumulates.** Every mission adds to the knowledge base. The system gets better over time. **Trust but verify.** Adversarial testing catches what regular testing misses. The same agent that writes code doesn't validate it. ## Requirements - Python 3.10+ - Node.js 18+ (optional, for dashboard JS modifications) - Anthropic API key - Linux environment (Ubuntu 22.04+, Debian 12+) ### Python Dependencies See `requirements.txt` or `pyproject.toml` for full list. ## Documentation - [QUICKSTART.md](QUICKSTART.md) - Get started in 5 minutes - [INSTALL.md](INSTALL.md) - Detailed installation guide - [USAGE.md](USAGE.md) - How to use AI-AtlasForge - [ARCHITECTURE.md](ARCHITECTURE.md) - System architecture ## License MIT License - see [LICENSE](LICENSE) for details. ## Contributing Contributions are welcome! Please feel free to submit issues and pull requests. ## Related Projects - **[AI-AfterImage](https://github.com/DragonShadows1978/AI-AfterImage)** - Episodic memory for AI coding agents. Gives Claude Code persistent memory of code it has written across sessions. Works great with AtlasForge for cross-mission code recall. ## Acknowledgments Built on Claude by Anthropic. Special thanks to the Claude Code team for making autonomous AI development possible.
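The configuration table above can be exercised through the make targets. A minimal sketch (the port value is an arbitrary example):

```bash
# Override the documented environment variables, then start the dashboard.
export ATLASFORGE_PORT=8080
export ATLASFORGE_DEBUG=true
make dashboard
```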
text/markdown
null
null
null
null
MIT
ai, claude, autonomous, research, development
[ "Development Status :: 4 - Beta", "Environment :: Console", "Environment :: Web Environment", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programmi...
[]
null
null
>=3.10
[]
[]
[]
[ "flask>=2.0.0", "flask-socketio>=5.0.0", "simple-websocket>=0.5.0", "anthropic>=0.18.0", "watchdog>=3.0.0", "psutil>=5.9.0", "numpy>=1.21.0", "scikit-learn>=1.0.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "black>=23.0.0; extra == \"dev\"", "flake8>=6.0.0; extr...
[]
[]
[]
[ "Homepage, https://github.com/DragonShadows1978/AI-AtlasForge", "Documentation, https://github.com/DragonShadows1978/AI-AtlasForge#readme", "Repository, https://github.com/DragonShadows1978/AI-AtlasForge.git", "Issues, https://github.com/DragonShadows1978/AI-AtlasForge/issues" ]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:34:47.169623
ai_atlasforge-1.2.0-py3-none-any.whl
211,995
2b/36/4f49ea4de9807cb07f6251d7f1d5468b963cdead5ecbb8ff9d84eba4f9a5/ai_atlasforge-1.2.0-py3-none-any.whl
py3
bdist_wheel
null
false
5799e601ce7db9e59f61f4be59f8986b
053f830cc2705b3ec177067d95c93e50b72ee76db9b64d932375bdcabfdc4e25
2b364f49ea4de9807cb07f6251d7f1d5468b963cdead5ecbb8ff9d84eba4f9a5
null
[]
2.2
ai-atlasforge
1.2.0
Autonomous AI research and development platform powered by Claude
# AI-AtlasForge An autonomous AI research and development platform powered by Claude. Run long-duration missions, accumulate cross-session knowledge, and build software autonomously. ## What is AI-AtlasForge? AI-AtlasForge is not a chatbot wrapper. It's an **autonomous research engine** that: - Runs multi-day missions without human intervention - Maintains mission continuity across context windows - Accumulates knowledge that persists across sessions - Self-corrects when drifting from objectives - Adversarially tests its own outputs ## Quick Start ### Prerequisites - Python 3.10+ - Anthropic API key (get one at https://console.anthropic.com/) - Linux environment (tested on Ubuntu 22.04+, Debian 12+) > **Platform Notes:** > - **Windows:** Use WSL2 (Windows Subsystem for Linux) > - **macOS:** Should work but is untested. Please report issues. ### Option 1: Standard Installation ```bash # Clone the repository git clone https://github.com/DragonShadows1978/AI-AtlasForge.git cd AI-AtlasForge # Run the installer ./install.sh # Configure your API key export ANTHROPIC_API_KEY='your-key-here' # Or edit config.yaml / .env # Verify installation ./verify.sh ``` ### Option 2: One-Liner Install ```bash curl -sSL https://raw.githubusercontent.com/DragonShadows1978/AI-AtlasForge/main/quick_install.sh | bash ``` ### Option 3: Docker Installation ```bash git clone https://github.com/DragonShadows1978/AI-AtlasForge.git cd AI-AtlasForge docker compose up -d # Dashboard at http://localhost:5050 ``` For detailed installation options, see [INSTALL.md](INSTALL.md) or [QUICKSTART.md](QUICKSTART.md). ### Running Your First Mission 1. **Start the Dashboard** (optional, for monitoring): ```bash make dashboard # Or: python3 dashboard_v2.py # Access at http://localhost:5050 ``` 2. **Create a Mission**: - Via Dashboard: Click "Create Mission" and enter your objectives - Via Sample: Run `make sample-mission` to load a hello-world mission - Via JSON: Create `state/mission.json` manually 3. 
**Start the Engine**: ```bash make run # Or: python3 claude_autonomous.py --mode=rd ``` ### Development Commands Run `make help` to see all available commands: ```bash make install # Full installation make verify # Verify installation make dashboard # Start dashboard make run # Start autonomous agent make docker # Start with Docker make sample-mission # Load sample mission ``` ## Architecture ``` +-------------------+ | Mission State | | (mission.json) | +--------+----------+ | +--------------+--------------+ | | +---------v---------+ +--------v--------+ | AtlasForge | | Dashboard | | (Execution Engine)| | (Monitoring) | +---------+---------+ +-----------------+ | +---------v---------+ | R&D Engine | | (State Machine) | +---------+---------+ | +---------v-------------------+ | Stage Pipeline | | | | PLANNING -> BUILDING -> | | TESTING -> ANALYZING -> | | CYCLE_END -> COMPLETE | +-----------------------------+ ``` ## Mission Lifecycle 1. **PLANNING** - Understand objectives, research codebase, create implementation plan 2. **BUILDING** - Implement the solution 3. **TESTING** - Validate implementation 4. **ANALYZING** - Evaluate results, identify issues 5. **CYCLE_END** - Generate reports, prepare continuation 6. **COMPLETE** - Mission finished Missions can iterate through multiple cycles until success criteria are met. ## Core Components ### atlasforge.py Main execution loop. Spawns Claude instances, manages state, handles graceful shutdown. ### af_engine.py State machine for mission execution. Manages stages, enforces constraints, tracks progress. ### dashboard_v2.py Web-based monitoring interface showing mission status, knowledge base, and analytics. 
### Knowledge Base SQLite database accumulating learnings across all missions: - Techniques discovered - Insights gained - Gotchas encountered - Reusable code patterns ### Adversarial Testing Separate Claude instances that test implementations: - RedTeam agents with no implementation knowledge - Mutation testing - Property-based testing ### GlassBox Post-mission introspection system: - Transcript parsing - Agent hierarchy reconstruction - Stage timeline visualization ## Key Features ### Mission Continuity Missions survive context window limits through: - Persistent mission.json state - Cycle-based iteration - Continuation prompts that preserve context ### Knowledge Accumulation Every mission adds to the knowledge base. The system improves over time as it learns patterns, gotchas, and techniques. ### Autonomous Operation Designed for unattended execution: - Graceful crash recovery - Stage checkpointing - Automatic cycle progression ## Directory Structure ``` AI-AtlasForge/ +-- atlasforge.py # Main entry point +-- af_engine.py # Stage state machine +-- dashboard_v2.py # Web dashboard +-- adversarial_testing/ # Testing framework +-- atlasforge_enhancements/ # Enhancement modules +-- workspace/ # Active workspace | +-- glassbox/ # Introspection tools | +-- artifacts/ # Plans, reports | +-- research/ # Notes, findings | +-- tests/ # Test scripts +-- state/ # Runtime state | +-- mission.json # Current mission | +-- claude_state.json # Execution state +-- missions/ # Mission workspaces +-- atlasforge_data/ | +-- knowledge_base/ # Accumulated learnings +-- logs/ # Execution logs ``` ## Configuration AI-AtlasForge uses environment variables for configuration: | Variable | Default | Description | |----------|---------|-------------| | `ATLASFORGE_PORT` | `5050` | Dashboard port | | `ATLASFORGE_ROOT` | (script directory) | Base directory | | `ATLASFORGE_DEBUG` | `false` | Enable debug logging | ## Dashboard Features The web dashboard provides real-time monitoring: - **Mission 
Status** - Current stage, progress, timing - **Activity Feed** - Live log of agent actions - **Knowledge Base** - Search and browse learnings - **Analytics** - Token usage, cost tracking - **Mission Queue** - Queue and schedule missions - **GlassBox** - Post-mission analysis ## Philosophy **First principles only.** No frameworks hiding integration failures. Every component built from scratch for full visibility. **Speed of machine, not human.** Designed for autonomous operation. Check in when convenient, not when required. **Knowledge accumulates.** Every mission adds to the knowledge base. The system gets better over time. **Trust but verify.** Adversarial testing catches what regular testing misses. The same agent that writes code doesn't validate it. ## Requirements - Python 3.10+ - Node.js 18+ (optional, for dashboard JS modifications) - Anthropic API key - Linux environment (Ubuntu 22.04+, Debian 12+) ### Python Dependencies See `requirements.txt` or `pyproject.toml` for full list. ## Documentation - [QUICKSTART.md](QUICKSTART.md) - Get started in 5 minutes - [INSTALL.md](INSTALL.md) - Detailed installation guide - [USAGE.md](USAGE.md) - How to use AI-AtlasForge - [ARCHITECTURE.md](ARCHITECTURE.md) - System architecture ## License MIT License - see [LICENSE](LICENSE) for details. ## Contributing Contributions are welcome! Please feel free to submit issues and pull requests. ## Related Projects - **[AI-AfterImage](https://github.com/DragonShadows1978/AI-AfterImage)** - Episodic memory for AI coding agents. Gives Claude Code persistent memory of code it has written across sessions. Works great with AtlasForge for cross-mission code recall. ## Acknowledgments Built on Claude by Anthropic. Special thanks to the Claude Code team for making autonomous AI development possible.
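The configuration table above can be exercised through the make targets. A minimal sketch (the port value is an arbitrary example):

```bash
# Override the documented environment variables, then start the dashboard.
export ATLASFORGE_PORT=8080
export ATLASFORGE_DEBUG=true
make dashboard
```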
text/markdown
null
null
null
null
MIT
ai, claude, autonomous, research, development
[ "Development Status :: 4 - Beta", "Environment :: Console", "Environment :: Web Environment", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programmi...
[]
null
null
>=3.10
[]
[]
[]
[ "flask>=2.0.0", "flask-socketio>=5.0.0", "simple-websocket>=0.5.0", "anthropic>=0.18.0", "watchdog>=3.0.0", "psutil>=5.9.0", "numpy>=1.21.0", "scikit-learn>=1.0.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "black>=23.0.0; extra == \"dev\"", "flake8>=6.0.0; extr...
[]
[]
[]
[ "Homepage, https://github.com/DragonShadows1978/AI-AtlasForge", "Documentation, https://github.com/DragonShadows1978/AI-AtlasForge#readme", "Repository, https://github.com/DragonShadows1978/AI-AtlasForge.git", "Issues, https://github.com/DragonShadows1978/AI-AtlasForge/issues" ]
twine/6.2.0 CPython/3.12.3
2026-01-16T04:34:48.607559
ai_atlasforge-1.2.0.tar.gz
194,136
c4/09/52d2926ab2a63ed2076d87e6eae8dca6b43772e765436532bc74fc9c2a8e/ai_atlasforge-1.2.0.tar.gz
source
sdist
null
false
07b21f8025d438f559aeb7551a83ef8e
c3589dcbe9ab77cbc1a72da2fb6b00262a6472775275bfa097925afff3db0210
c40952d2926ab2a63ed2076d87e6eae8dca6b43772e765436532bc74fc9c2a8e
null
[]
2.4
kestrel
0.0.2
a fast, efficient inference engine for moondream
# Kestrel High-performance inference engine for the [Moondream](https://moondream.ai) vision-language model. Kestrel provides async, micro-batched serving with streaming support, paged KV caching, and optimized CUDA kernels. It's designed for production deployments where throughput and latency matter. ## Features - **Async micro-batching** — Cooperative scheduler batches heterogeneous requests without compromising per-request latency - **Streaming** — Real-time token streaming for query and caption tasks - **Multi-task** — Visual Q&A, captioning, point detection, object detection, and segmentation - **Paged KV cache** — Efficient memory management for high concurrency - **Prefix caching** — Radix tree-based caching for repeated prompts and images - **LoRA adapters** — Parameter-efficient fine-tuning support with automatic cloud loading ## Requirements - Python 3.10+ - NVIDIA Hopper GPU or newer (e.g. H100) - `MOONDREAM_API_KEY` environment variable (get this from [moondream.ai](https://moondream.ai)) ## Installation ```bash pip install kestrel huggingface_hub ``` ## Model Access The model weights are hosted on Hugging Face and require access approval: 1. Request access at [vikhyatk/moondream-next](https://huggingface.co/vikhyatk/moondream-next) 2. 
Once approved, authenticate with either: - `huggingface-cli login`, or - Set the `HF_TOKEN` environment variable ## Quick Start ```python import asyncio import pyvips from huggingface_hub import hf_hub_download from kestrel.config import RuntimeConfig from kestrel.engine import InferenceEngine async def main(): # Download the model (cached after first run) model_path = hf_hub_download( "vikhyatk/moondream-next", filename="model_fp8.pt", revision="1fdf7871dc596a89f73491db95543870727b5dce", ) # Configure the engine cfg = RuntimeConfig(model_path) # Create the engine (loads model and warms up) engine = await InferenceEngine.create(cfg) # Load an image image = pyvips.Image.new_from_file("photo.jpg") # Visual question answering result = await engine.query( image=image, question="What's in this image?", settings={"temperature": 0.2, "max_tokens": 512}, ) print(result.output["answer"]) # Clean up await engine.shutdown() asyncio.run(main()) ``` ## Tasks Kestrel supports several vision-language tasks through dedicated methods on the engine. ### Query (Visual Q&A) Ask questions about an image: ```python result = await engine.query( image=image, question="How many people are in this photo?", settings={ "temperature": 0.2, # Lower = more deterministic "top_p": 0.9, "max_tokens": 512, }, ) print(result.output["answer"]) ``` ### Caption Generate image descriptions: ```python result = await engine.caption( image, length="normal", # "short", "normal", or "long" settings={"temperature": 0.2, "max_tokens": 512}, ) print(result.output["caption"]) ``` ### Point Locate objects as normalized (x, y) coordinates: ```python result = await engine.point(image, "person") print(result.output["points"]) # [{"x": 0.5, "y": 0.3}, {"x": 0.8, "y": 0.4}] ``` Coordinates are normalized to [0, 1] where (0, 0) is top-left. 
### Detect Detect objects as bounding boxes: ```python result = await engine.detect( image, "car", settings={"max_objects": 10}, ) print(result.output["objects"]) # [{"x_min": 0.1, "y_min": 0.2, "x_max": 0.5, "y_max": 0.6}, ...] ``` Bounding box coordinates are normalized to [0, 1]. ### Segment Generate a segmentation mask: ```python result = await engine.segment(image, "dog") seg = result.output["segments"][0] print(seg["svg_path"]) # SVG path data for the mask print(seg["bbox"]) # {"x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...} ``` Note: Segmentation requires separate model weights. Contact [moondream.ai](https://moondream.ai) for access. ## Streaming For longer responses, you can stream tokens as they're generated: ```python image = pyvips.Image.new_from_file("photo.jpg") stream = await engine.query( image=image, question="Describe this scene in detail.", stream=True, settings={"max_tokens": 1024}, ) # Print tokens as they arrive async for chunk in stream: print(chunk.text, end="", flush=True) # Get the final result with metrics result = await stream.result() print(f"\n\nGenerated {result.metrics.output_tokens} tokens") ``` Streaming is supported for `query` and `caption` methods. ## Response Format All methods return an `EngineResult` with these fields: ```python result.output # Dict with task-specific output ("answer", "caption", "points", etc.) 
result.finish_reason # "stop" (natural end) or "length" (hit max_tokens) result.metrics # Timing and token counts ``` The `metrics` object contains: ```python result.metrics.input_tokens # Number of input tokens (including image) result.metrics.output_tokens # Number of generated tokens result.metrics.prefill_time_ms # Time to process input result.metrics.decode_time_ms # Time to generate output result.metrics.ttft_ms # Time to first token ``` ## Using Finetunes If you've created a finetuned model through the [Moondream API](https://moondream.ai), you can use it by passing the adapter ID: ```python result = await engine.query( image=image, question="What's in this image?", settings={"adapter": "01J5Z3NDEKTSV4RRFFQ69G5FAV@1000"}, ) ``` The adapter ID format is `{finetune_id}@{step}` where: - `finetune_id` is the ID of your finetune job - `step` is the training step/checkpoint to use Adapters are automatically downloaded and cached on first use. ## Configuration ### RuntimeConfig ```python RuntimeConfig( model_path="/path/to/model.pt", max_batch_size=8, # Max concurrent requests (default: 4) ) ``` ### Environment Variables | Variable | Description | |----------|-------------| | `MOONDREAM_API_KEY` | Required. Get this from [moondream.ai](https://moondream.ai). | ## License Free for evaluation and non-commercial use. Commercial use requires a license from [Moondream](https://moondream.ai). Copyright (c) 2024-2025 M87 Labs, Inc.
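As a small illustration of the `metrics` fields above, decode throughput can be derived from `output_tokens` and `decode_time_ms`. This sketch uses hypothetical stand-in values; in practice the object comes from an `EngineResult`:

```python
from types import SimpleNamespace

# Stand-in for result.metrics with hypothetical values; the real
# object is attached to every EngineResult (fields documented above).
metrics = SimpleNamespace(output_tokens=256, decode_time_ms=2000.0, ttft_ms=85.0)

# Decode throughput: generated tokens per second of decode time.
tokens_per_sec = metrics.output_tokens / (metrics.decode_time_ms / 1000.0)
print(f"{tokens_per_sec:.1f} tok/s, TTFT {metrics.ttft_ms:.0f} ms")  # 128.0 tok/s, TTFT 85 ms
```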
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "torch==2.9.1", "kestrel-kernels==0.1.0", "tokenizers>=0.15", "safetensors>=0.4", "transformers>=4.44", "pyvips>=2.3", "pyvips-binary>=2.48", "pillow>=10", "torch-c-dlpack-ext>=0.1.3", "starlette>=0.37", "httpx>=0.27", "uvicorn>=0.30", "flashinfer-python>=0.6.0", "opencv-python-headless>=4...
[]
[]
[]
[]
uv/0.9.0
2026-01-16T04:35:03.460595
kestrel-0.0.2-py3-none-any.whl
154,382
f4/42/4c0e5c1eb1d0ec456247af71aef37a41cdb93d364c233c29882db961df4a/kestrel-0.0.2-py3-none-any.whl
py3
bdist_wheel
null
false
1b616922ad9d5f86cfe90049a8a04a3b
cd5d13b56c0bdbacec866f5ac37a6e423417f44610e9320815279fdc78797ef6
f4424c0e5c1eb1d0ec456247af71aef37a41cdb93d364c233c29882db961df4a
null
[ "LICENSE.md" ]
2.4
kestrel
0.0.2
a fast, efficient inference engine for moondream
# Kestrel High-performance inference engine for the [Moondream](https://moondream.ai) vision-language model. Kestrel provides async, micro-batched serving with streaming support, paged KV caching, and optimized CUDA kernels. It's designed for production deployments where throughput and latency matter. ## Features - **Async micro-batching** — Cooperative scheduler batches heterogeneous requests without compromising per-request latency - **Streaming** — Real-time token streaming for query and caption tasks - **Multi-task** — Visual Q&A, captioning, point detection, object detection, and segmentation - **Paged KV cache** — Efficient memory management for high concurrency - **Prefix caching** — Radix tree-based caching for repeated prompts and images - **LoRA adapters** — Parameter-efficient fine-tuning support with automatic cloud loading ## Requirements - Python 3.10+ - NVIDIA Hopper GPU or newer (e.g. H100) - `MOONDREAM_API_KEY` environment variable (get this from [moondream.ai](https://moondream.ai)) ## Installation ```bash pip install kestrel huggingface_hub ``` ## Model Access The model weights are hosted on Hugging Face and require access approval: 1. Request access at [vikhyatk/moondream-next](https://huggingface.co/vikhyatk/moondream-next) 2. 
Once approved, authenticate with either: - `huggingface-cli login`, or - Set the `HF_TOKEN` environment variable ## Quick Start ```python import asyncio import pyvips from huggingface_hub import hf_hub_download from kestrel.config import RuntimeConfig from kestrel.engine import InferenceEngine async def main(): # Download the model (cached after first run) model_path = hf_hub_download( "vikhyatk/moondream-next", filename="model_fp8.pt", revision="1fdf7871dc596a89f73491db95543870727b5dce", ) # Configure the engine cfg = RuntimeConfig(model_path) # Create the engine (loads model and warms up) engine = await InferenceEngine.create(cfg) # Load an image image = pyvips.Image.new_from_file("photo.jpg") # Visual question answering result = await engine.query( image=image, question="What's in this image?", settings={"temperature": 0.2, "max_tokens": 512}, ) print(result.output["answer"]) # Clean up await engine.shutdown() asyncio.run(main()) ``` ## Tasks Kestrel supports several vision-language tasks through dedicated methods on the engine. ### Query (Visual Q&A) Ask questions about an image: ```python result = await engine.query( image=image, question="How many people are in this photo?", settings={ "temperature": 0.2, # Lower = more deterministic "top_p": 0.9, "max_tokens": 512, }, ) print(result.output["answer"]) ``` ### Caption Generate image descriptions: ```python result = await engine.caption( image, length="normal", # "short", "normal", or "long" settings={"temperature": 0.2, "max_tokens": 512}, ) print(result.output["caption"]) ``` ### Point Locate objects as normalized (x, y) coordinates: ```python result = await engine.point(image, "person") print(result.output["points"]) # [{"x": 0.5, "y": 0.3}, {"x": 0.8, "y": 0.4}] ``` Coordinates are normalized to [0, 1] where (0, 0) is top-left. 
### Detect Detect objects as bounding boxes: ```python result = await engine.detect( image, "car", settings={"max_objects": 10}, ) print(result.output["objects"]) # [{"x_min": 0.1, "y_min": 0.2, "x_max": 0.5, "y_max": 0.6}, ...] ``` Bounding box coordinates are normalized to [0, 1]. ### Segment Generate a segmentation mask: ```python result = await engine.segment(image, "dog") seg = result.output["segments"][0] print(seg["svg_path"]) # SVG path data for the mask print(seg["bbox"]) # {"x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...} ``` Note: Segmentation requires separate model weights. Contact [moondream.ai](https://moondream.ai) for access. ## Streaming For longer responses, you can stream tokens as they're generated: ```python image = pyvips.Image.new_from_file("photo.jpg") stream = await engine.query( image=image, question="Describe this scene in detail.", stream=True, settings={"max_tokens": 1024}, ) # Print tokens as they arrive async for chunk in stream: print(chunk.text, end="", flush=True) # Get the final result with metrics result = await stream.result() print(f"\n\nGenerated {result.metrics.output_tokens} tokens") ``` Streaming is supported for `query` and `caption` methods. ## Response Format All methods return an `EngineResult` with these fields: ```python result.output # Dict with task-specific output ("answer", "caption", "points", etc.) 
result.finish_reason # "stop" (natural end) or "length" (hit max_tokens) result.metrics # Timing and token counts ``` The `metrics` object contains: ```python result.metrics.input_tokens # Number of input tokens (including image) result.metrics.output_tokens # Number of generated tokens result.metrics.prefill_time_ms # Time to process input result.metrics.decode_time_ms # Time to generate output result.metrics.ttft_ms # Time to first token ``` ## Using Finetunes If you've created a finetuned model through the [Moondream API](https://moondream.ai), you can use it by passing the adapter ID: ```python result = await engine.query( image=image, question="What's in this image?", settings={"adapter": "01J5Z3NDEKTSV4RRFFQ69G5FAV@1000"}, ) ``` The adapter ID format is `{finetune_id}@{step}` where: - `finetune_id` is the ID of your finetune job - `step` is the training step/checkpoint to use Adapters are automatically downloaded and cached on first use. ## Configuration ### RuntimeConfig ```python RuntimeConfig( model_path="/path/to/model.pt", max_batch_size=8, # Max concurrent requests (default: 4) ) ``` ### Environment Variables | Variable | Description | |----------|-------------| | `MOONDREAM_API_KEY` | Required. Get this from [moondream.ai](https://moondream.ai). | ## License Free for evaluation and non-commercial use. Commercial use requires a license from [Moondream](https://moondream.ai). Copyright (c) 2024-2025 M87 Labs, Inc.
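As a small illustration of the `metrics` fields above, decode throughput can be derived from `output_tokens` and `decode_time_ms`. This sketch uses hypothetical stand-in values; in practice the object comes from an `EngineResult`:

```python
from types import SimpleNamespace

# Stand-in for result.metrics with hypothetical values; the real
# object is attached to every EngineResult (fields documented above).
metrics = SimpleNamespace(output_tokens=256, decode_time_ms=2000.0, ttft_ms=85.0)

# Decode throughput: generated tokens per second of decode time.
tokens_per_sec = metrics.output_tokens / (metrics.decode_time_ms / 1000.0)
print(f"{tokens_per_sec:.1f} tok/s, TTFT {metrics.ttft_ms:.0f} ms")  # 128.0 tok/s, TTFT 85 ms
```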
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "torch==2.9.1", "kestrel-kernels==0.1.0", "tokenizers>=0.15", "safetensors>=0.4", "transformers>=4.44", "pyvips>=2.3", "pyvips-binary>=2.48", "pillow>=10", "torch-c-dlpack-ext>=0.1.3", "starlette>=0.37", "httpx>=0.27", "uvicorn>=0.30", "flashinfer-python>=0.6.0", "opencv-python-headless>=4...
[]
[]
[]
[]
uv/0.9.0
2026-01-16T04:35:05.029261
kestrel-0.0.2.tar.gz
132,873
d7/32/cfb0ae81c7d572e4e1125bbb6392e513390c55a7161f294c0cfc64a19600/kestrel-0.0.2.tar.gz
source
sdist
null
false
53ba206f1d0302282f31da53cefce8f6
f01842a3fcaf244f939506231ddd24cee16dcc9de49f11e7de87758bbbf773d0
d732cfb0ae81c7d572e4e1125bbb6392e513390c55a7161f294c0cfc64a19600
null
[ "LICENSE.md" ]
2.4
pyhuntress
0.2.16
A full-featured Python client for the Huntress APIs
# pyhuntress - An API library for Huntress SIEM and Huntress Managed SAT, written in Python pyHuntress is a full-featured, type annotated API client written in Python for the Huntress APIs. This library has been developed with the intention of making the Huntress APIs simple and accessible to non-coders while allowing experienced coders to utilize all features the API has to offer without the boilerplate. pyHuntress currently supports both Huntress SIEM and Huntress Managed SAT products. Features: ========= - **100% API Coverage.** All endpoints and response models. - **Non-coder friendly.** 100% annotated for full IDE auto-completion. Clients handle requests and authentication - just plug the right details in and go! - **Fully annotated.** This library has a strong focus on type safety and type hinting. Models are declared and parsed using [Pydantic](https://github.com/pydantic/pydantic) pyHuntress is currently in **development**. Known Issues: ============= - As this project is still a WIP, documentation or code commentary may not always align. 
- Huntress Managed SAT POST requests not built Road Map: ============= - Add support for POST requests - Add required parameters when calling the completion_certificat endpoint How-to: ============= - [Install](#install) - [Initializing the API Clients](#initializing-the-api-clients) - [Huntress Managed SAT](#huntress-managed-sat) - [Huntress SIEM](#huntress-siem) - [Working with Endpoints](#working-with-endpoints) - [Get many](#get-many) - [Get one](#get-one) - [Get with params](#get-with-params) - [Pagination](#pagination) - [Contributing](#contributing) - [Supporting the project](#supporting-the-project) # Install Open a terminal and run ```pip install pyhuntress``` # Initializing the API Clients ### Huntress Managed SAT ```python from pyhuntress import HuntressSATAPIClient # init client sat_api_client = HuntressSATAPIClient( "mycurricula.com", "your_api_public_key", "your_api_private_key", ) ``` ### Huntress SIEM ```python from pyhuntress import HuntressSIEMAPIClient # init client siem_api_client = HuntressSIEMAPIClient( "your_huntress_siem_url", "your_api_public_key", "your_api_private_key", ) ``` # Working with Endpoints Endpoints map 1:1 to what's available for both Huntress Managed SAT and Huntress SIEM. 
For more information, check out the following resources: - [Huntress Managed SAT REST API Docs](https://curricula.stoplight.io/docs/curricula-api/00fkcnpgk5vnn-getting-started) - [Huntress SIEM REST API Docs](https://api.huntress.io/docs) ### Get many ```python ### Managed SAT ### # sends GET request to /company/companies endpoint companies = sat_api_client.company.companies.get() ### SIEM ### # sends GET request to /agents endpoint agents = siem_api_client.agents.get() ``` ### Get one ```python ### Managed SAT ### # sends GET request to /accounts/{id} endpoint accounts = sat_api_client.accounts.id("abc123").get() ### SIEM ### # sends GET request to /agents/{id} endpoint agent = siem_api_client.agents.id(250).get() ``` ### Get with params ```python ### Managed SAT ### # sends GET request to /company/companies with a conditions query string conditional_company = sat_api_client.company.companies.get(params={ 'conditions': 'company/id=250' }) ### SIEM ### # sends GET request to /agents endpoint with a condition query string conditional_agent = siem_api_client.agents.get(params={ 'platform': 'windows' }) ``` # Pagination The Huntress SIEM API paginates data for performance reasons through the ```page``` and ```limit``` query parameters. ```limit``` is capped at a maximum of 500. To make working with paginated data easy, endpoints that implement a GET response with an array also supply a ```paginated()``` method. Under the hood this wraps a GET request, but does a lot of neat stuff to make working with pages easier. 
Working with pagination ```python # initialize a PaginatedResponse instance for /agents, starting on page 1 with a limit of 100 paginated_agents = siem_api_client.agents.paginated(1, 100) # access the data from the current page using the .data field page_one_data = paginated_agents.data # if there's a next page, retrieve the next page worth of data paginated_agents.get_next_page() # if there's a previous page, retrieve the previous page worth of data paginated_agents.get_previous_page() # iterate over all agents on the current page for agent in paginated_agents: # ... do things ... # iterate over all agents in all pages # this works by yielding every item on the page, then fetching the next page and continuing until there's no data left for agent in paginated_agents.all(): # ... do things ... ``` # Contributing Contributions to the project are welcome. If you find any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request. # Supporting the project :heart: # Inspiration and Stolen Code The premise behind this came from the [pyConnectWise](https://github.com/HealthITAU/pyconnectwise) package and I stole **most** of the code and adapted it to the Huntress API endpoints. # How to Build > python -m build > python -m twine upload dist/*
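The page/limit walk that `paginated().all()` performs can be sketched generically. This is an illustrative stand-in, not pyHuntress code: `fetch` models any endpoint's GET with `params={'page': ..., 'limit': ...}` that returns an empty list past the last page, and the 500 cap mirrors the SIEM API's documented maximum:

```python
from typing import Any, Callable, Iterator, List

def iter_all_pages(fetch: Callable[[int, int], List[Any]], limit: int = 500) -> Iterator[Any]:
    """Yield every item by walking page/limit pagination until a short or empty page."""
    page = 1
    while True:
        items = fetch(page, limit)
        if not items:
            return
        yield from items
        if len(items) < limit:  # short page: this was the last one
            return
        page += 1

# usage with a fake data source of 1042 items (three pages at limit=500)
data = list(range(1042))
fake_fetch = lambda page, limit: data[(page - 1) * limit : page * limit]
collected = list(iter_all_pages(fake_fetch))
```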
text/markdown
null
Peter Annabel <peter.annabel@gmail.com>
null
null
null
API, Annotated, Client, Huntress, MSP, Managed SAT, Python, SIEM, Typed
[ "Intended Audience :: Developers", "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.9
[]
[]
[]
[ "pydantic", "requests", "typing-extensions" ]
[]
[]
[]
[ "Homepage, https://github.com/brygphilomena/pyhuntress", "Issues, https://github.com/brygphilomena/pyhuntress/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:35:50.475108
pyhuntress-0.2.16-py3-none-any.whl
89,237
95/9d/b091c449ad65ef20bc5b0c1ce5cb5d8d7d19d0f96695c7e8a2a4a07d056e/pyhuntress-0.2.16-py3-none-any.whl
py3
bdist_wheel
null
false
4737afbbc18cc0cf5f716ab19760fe43
2480ebf465e5e0729a3b319d1aa774354773505494dadc9385066d79f5e157dd
959db091c449ad65ef20bc5b0c1ce5cb5d8d7d19d0f96695c7e8a2a4a07d056e
GPL-3.0-only
[ "LICENSE" ]
2.4
pyhuntress
0.2.16
A full-featured Python client for the Huntress APIs
# pyhuntress - An API library for Huntress SIEM and Huntress Managed SAT, written in Python pyHuntress is a full-featured, type annotated API client written in Python for the Huntress APIs. This library has been developed with the intention of making the Huntress APIs simple and accessible to non-coders while allowing experienced coders to utilize all features the API has to offer without the boilerplate. pyHuntress currently supports both Huntress SIEM and Huntress Managed SAT products. Features: ========= - **100% API Coverage.** All endpoints and response models. - **Non-coder friendly.** 100% annotated for full IDE auto-completion. Clients handle requests and authentication - just plug the right details in and go! - **Fully annotated.** This library has a strong focus on type safety and type hinting. Models are declared and parsed using [Pydantic](https://github.com/pydantic/pydantic) pyHuntress is currently in **development**. Known Issues: ============= - As this project is still a WIP, documentation or code commentary may not always align. 
- Huntress Managed SAT POST requests not built Road Map: ============= - Add support for POST requests - Add required parameters when calling the completion_certificat endpoint How-to: ============= - [Install](#install) - [Initializing the API Clients](#initializing-the-api-clients) - [Huntress Managed SAT](#huntress-managed-sat) - [Huntress SIEM](#huntress-siem) - [Working with Endpoints](#working-with-endpoints) - [Get many](#get-many) - [Get one](#get-one) - [Get with params](#get-with-params) - [Pagination](#pagination) - [Contributing](#contributing) - [Supporting the project](#supporting-the-project) # Install Open a terminal and run ```pip install pyhuntress``` # Initializing the API Clients ### Huntress Managed SAT ```python from pyhuntress import HuntressSATAPIClient # init client sat_api_client = HuntressSATAPIClient( "mycurricula.com", "your_api_public_key", "your_api_private_key", ) ``` ### Huntress SIEM ```python from pyhuntress import HuntressSIEMAPIClient # init client siem_api_client = HuntressSIEMAPIClient( "your_huntress_siem_url", "your_api_public_key", "your_api_private_key", ) ``` # Working with Endpoints Endpoints map 1:1 to what's available for both Huntress Managed SAT and Huntress SIEM. 
For more information, check out the following resources: - [Huntress Managed SAT REST API Docs](https://curricula.stoplight.io/docs/curricula-api/00fkcnpgk5vnn-getting-started) - [Huntress SIEM REST API Docs](https://api.huntress.io/docs) ### Get many ```python ### Managed SAT ### # sends GET request to /company/companies endpoint companies = sat_api_client.company.companies.get() ### SIEM ### # sends GET request to /agents endpoint agents = siem_api_client.agents.get() ``` ### Get one ```python ### Managed SAT ### # sends GET request to /accounts/{id} endpoint accounts = sat_api_client.accounts.id("abc123").get() ### SIEM ### # sends GET request to /agents/{id} endpoint agent = siem_api_client.agents.id(250).get() ``` ### Get with params ```python ### Managed SAT ### # sends GET request to /company/companies with a conditions query string conditional_company = sat_api_client.company.companies.get(params={ 'conditions': 'company/id=250' }) ### SIEM ### # sends GET request to /agents endpoint with a condition query string conditional_agent = siem_api_client.agents.get(params={ 'platform': 'windows' }) ``` # Pagination The Huntress SIEM API paginates data for performance reasons through the ```page``` and ```limit``` query parameters. ```limit``` is capped at a maximum of 500. To make working with paginated data easy, endpoints that implement a GET response with an array also supply a ```paginated()``` method. Under the hood this wraps a GET request, but does a lot of neat stuff to make working with pages easier. 
Working with pagination ```python # initialize a PaginatedResponse instance for /agents, starting on page 1 with a limit of 100 paginated_agents = siem_api_client.agents.paginated(1, 100) # access the data from the current page using the .data field page_one_data = paginated_agents.data # if there's a next page, retrieve the next page worth of data paginated_agents.get_next_page() # if there's a previous page, retrieve the previous page worth of data paginated_agents.get_previous_page() # iterate over all agents on the current page for agent in paginated_agents: # ... do things ... # iterate over all agents in all pages # this works by yielding every item on the page, then fetching the next page and continuing until there's no data left for agent in paginated_agents.all(): # ... do things ... ``` # Contributing Contributions to the project are welcome. If you find any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request. # Supporting the project :heart: # Inspiration and Stolen Code The premise behind this came from the [pyConnectWise](https://github.com/HealthITAU/pyconnectwise) package and I stole **most** of the code and adapted it to the Huntress API endpoints. # How to Build > python -m build > python -m twine upload dist/*
text/markdown
null
Peter Annabel <peter.annabel@gmail.com>
null
null
null
API, Annotated, Client, Huntress, MSP, Managed SAT, Python, SIEM, Typed
[ "Intended Audience :: Developers", "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.9
[]
[]
[]
[ "pydantic", "requests", "typing-extensions" ]
[]
[]
[]
[ "Homepage, https://github.com/brygphilomena/pyhuntress", "Issues, https://github.com/brygphilomena/pyhuntress/issues" ]
twine/6.1.0 CPython/3.13.7
2026-01-16T04:35:53.387406
pyhuntress-0.2.16.tar.gz
38,458
9d/ac/56a5cd974808db59e19dc3e68d3701702058af991598cd1c02623c0a8246/pyhuntress-0.2.16.tar.gz
source
sdist
null
false
ef5f9a785af41c4003451b1c128051de
06bbe8030acfb209b3190cd3c71ee47ccb0421b4cbe33c6cb2e14cee7281c56b
9dac56a5cd974808db59e19dc3e68d3701702058af991598cd1c02623c0a8246
GPL-3.0-only
[ "LICENSE" ]
2.1
AOT-biomaps
2.0.5
Acousto-Optic Tomography
null
null
Lucas Duclos
lucas.duclos@universite-paris-saclay.fr
null
null
null
null
[]
[]
https://github.com/LucasDuclos/AcoustoOpticTomography
null
null
[]
[]
[]
[ "k-wave-python", "setuptools", "pyyaml", "numba", "tqdm", "GPUtil", "scikit-image", "cupy; extra == \"gpu\"", "nvidia-ml-py3; extra == \"gpu\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.7
2025-06-16T09:56:17.720013
AOT_biomaps-2.0.5-py3-none-any.whl
41,368
e8/4a/ce12df5ea5481edc54cae22d51571f3d1b4a9f2bca4299f591d0df5ac29b/AOT_biomaps-2.0.5-py3-none-any.whl
py3
bdist_wheel
null
false
6bcac0c86923d10f979f944bc887d4df
a06deb623eed0e1bbdcebeab1f44aff584d8d7959176bc62883681920f8865aa
e84ace12df5ea5481edc54cae22d51571f3d1b4a9f2bca4299f591d0df5ac29b
null
[]
2.4
Adafruit-Blinka
8.60.3
CircuitPython APIs for non-CircuitPython versions of Python such as CPython on Linux and MicroPython.
Introduction ============ .. image:: https://readthedocs.org/projects/adafruit-micropython-blinka/badge/?version=latest :target: https://circuitpython.readthedocs.io/projects/blinka/en/latest/ :alt: Documentation Status .. image:: https://img.shields.io/discord/327254708534116352.svg :target: https://adafru.it/discord :alt: Discord .. image:: https://travis-ci.com/adafruit/Adafruit_Blinka.svg?branch=master :target: https://travis-ci.com/adafruit/Adafruit_Blinka :alt: Build Status .. image:: https://img.shields.io/badge/code%20style-black-000000.svg :target: https://github.com/psf/black :alt: Code Style: Black This repository contains a selection of packages emulating the CircuitPython API for devices or hosts running CPython or MicroPython. Working code exists to emulate these CircuitPython packages: * **analogio** - analog input/output pins, using pin identities from board+microcontroller packages * **bitbangio** - software-driven interfaces for I2C, SPI * **board** - breakout-specific pin identities * **busio** - hardware-driven interfaces for I2C, SPI, UART * **digitalio** - digital input/output pins, using pin identities from board+microcontroller packages * **keypad** - support for scanning keys and key matrices * **microcontroller** - chip-specific pin identities * **micropython** - MicroPython-specific module * **neopixel_write** - low-level interface to NeoPixels * **pulseio** - contains classes that provide access to basic pulse IO (PWM) * **pwmio** - contains classes that provide access to basic pulse IO (PWM) * **rainbowio** - provides the colorwheel() function * **usb_hid** - act as a hid-device using usb_gadget kernel driver For details, see the `Blinka API reference <https://circuitpython.readthedocs.io/projects/blinka/en/latest/index.html>`_. Dependencies ============= The emulation described above is intended to provide a CircuitPython-like API for devices which are running CPython or Micropython. 
Since corresponding packages should be built-in to any standard CircuitPython image, they have no value on a device already running CircuitPython and would likely conflict in unhappy ways. The test suites in the test/src folder under **testing.universal** are by design intended to run on *either* CircuitPython *or* CPython/Micropython+compatibility layer to prove conformance. Installing from PyPI ===================== On supported GNU/Linux systems like the Raspberry Pi, you can install the driver locally `from PyPI <https://pypi.org/project/Adafruit-Blinka/>`_. To install for current user: .. code-block:: shell pip3 install Adafruit-Blinka To install system-wide (this may be required in some cases): .. code-block:: shell sudo pip3 install Adafruit-Blinka To install in a virtual environment in your current project: .. code-block:: shell mkdir project-name && cd project-name python3 -m venv .env source .env/bin/activate pip3 install Adafruit-Blinka Usage Example ============= The pin names may vary by board, so you may need to change the pin names in the code. This example runs on the Raspberry Pi boards to blink an LED connected to GPIO 18 (Pin 12): .. code-block:: python import time import board import digitalio PIN = board.D18 print("hello blinky!") led = digitalio.DigitalInOut(PIN) led.direction = digitalio.Direction.OUTPUT while True: led.value = True time.sleep(0.5) led.value = False time.sleep(0.5) Contributing ============ Contributions are welcome! Please read our `Code of Conduct <https://github.com/adafruit/Adafruit_Blinka/blob/master/CODE_OF_CONDUCT.md>`_ before contributing to help this project stay welcoming. Building locally ================ Sphinx documentation ----------------------- Sphinx is used to build the documentation based on rST files and comments in the code. First, install dependencies (feel free to reuse the virtual environment from above): .. 
code-block:: shell python3 -m venv .env source .env/bin/activate pip install Sphinx sphinx-rtd-theme Adafruit-PlatformDetect Now, once you have the virtual environment activated: .. code-block:: shell cd docs sphinx-build -E -W -b html . _build/html This will output the documentation to ``docs/_build/html``. Open index.html in your browser to view the docs. Due to ``-W``, the build will also error out on any warning, just as Travis will; this is a good way to verify locally that the build will pass.
text/x-rst
null
Adafruit Industries <circuitpython@adafruit.com>
null
null
MIT
null
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Topic :: Software Development :: Libraries", "Topic :: System :: Hardware", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.7" ]
[]
null
null
null
[]
[]
[]
[ "Adafruit-PlatformDetect>=3.70.1", "Adafruit-PureIO>=1.1.7", "Jetson.GPIO; platform_machine == \"aarch64\"", "RPi.GPIO; platform_machine == \"armv7l\" or platform_machine == \"armv6l\" or platform_machine == \"aarch64\"", "rpi_ws281x>=4.0.0; platform_machine == \"armv7l\" or platform_machine == \"armv6l\" o...
[]
[]
[]
[ "Homepage, https://github.com/adafruit/Adafruit_Blinka" ]
twine/6.1.0 CPython/3.11.12
2025-06-19T19:06:57.779654
adafruit_blinka-8.60.3-py3-none-any.whl
390,305
5b/41/0700a963bb808a4c0a9b916a6cd90d73c79fe6a217f07935a920619d3aaa/adafruit_blinka-8.60.3-py3-none-any.whl
py3
bdist_wheel
null
false
8c5157304b9f06b2b9026af4ff440c06
312bcc46ddba472ed76f5221affcf02dcb1a25c65d8556c580701c03c7cc94a7
5b410700a963bb808a4c0a9b916a6cd90d73c79fe6a217f07935a920619d3aaa
null
[ "LICENSE" ]
2.1
CalcTool
0.1.3.4
Tools for calculations
```python sort(arr: List[Any], key: Callable[[Any], Any] = lambda x: x, reverse: bool = False) ``` Introsort, which combines the advantages of multiple sorting algorithms to ensure efficient performance in all cases. Sorts in place and does not return a list. `arr` is the list to be sorted. `key` is a function to extract a comparison key from each element, allowing custom sorting without modifying the original data. `reverse` specifies whether to sort in descending order (default is ascending). ```python log(n, m, precision=50) ``` Accurately calculate the logarithm of `n` with base `m`, following the parameter order of `math.log()`. The result retains 50 decimal places by default; if the result is very close to an integer, the function returns the rounded integer value. `m` is the base of the logarithm (of type `int/float/Decimal`), `n` is the argument (of type `int/float/Decimal`), and `precision` is an optional calculation precision parameter (integer, defaulting to 50 decimal places). ```python LaTeXCalculate(LaTeX_string: str) ``` Evaluate a LaTeX expression and return a float. `LaTeX_string` (string) is a mathematical expression in LaTeX format.
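A high-precision base-`m` logarithm along the lines of CalcTool's `log` can be sketched with the standard library's `decimal` module. This is an illustrative re-implementation under stated assumptions (integer snapping within `10**-precision`), not CalcTool's actual code:

```python
from decimal import Decimal, getcontext

def precise_log(n, m, precision=50):
    """Return log base m of n, snapping to a nearby integer when the error is tiny."""
    getcontext().prec = precision + 10  # guard digits to absorb intermediate rounding
    result = Decimal(str(n)).ln() / Decimal(str(m)).ln()
    nearest = result.to_integral_value()
    if abs(result - nearest) < Decimal(1).scaleb(-precision):
        return int(nearest)  # e.g. log base 2 of 8 comes back as the int 3
    return +result  # unary plus rounds to the current context precision

print(precise_log(8, 2))  # -> 3
```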
text/markdown
Zhu Chongjing
zhuchongjing_pypi@163.com
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.7
2025-06-28T02:08:06.906352
CalcTool-0.1.3.4-py3-none-any.whl
7,048
38/4a/0410600a695bbf470c78462f3e9881d5b235722fdcc7e505ddd6b3440815/CalcTool-0.1.3.4-py3-none-any.whl
py3
bdist_wheel
null
false
01600072198288806f1e99cfad5071d9
61b4544971a6495830edf2a531551c698ec5e318d5c78aa6775112e840ffdb10
384a0410600a695bbf470c78462f3e9881d5b235722fdcc7e505ddd6b3440815
null
[]
2.1
CalcTool
0.1.2.9
Tools for calculations
```python def sort(arr: List[Any], key: Callable[[Any], Any] = lambda x: x, reverse: bool = False) -> None: ``` Introsort, which combines the advantages of multiple sorting algorithms to ensure efficient performance in all cases. Sorts in place and does not return a list. `arr` is the list to be sorted. `key` is a function to extract a comparison key from each element, allowing custom sorting without modifying the original data. `reverse` specifies whether to sort in descending order (default is ascending). ```python def log(n, m, precision=50): ``` Accurately calculate the logarithm of `n` with base `m`, following the parameter order of `math.log()`. The result retains 50 decimal places by default; if the result is very close to an integer, the function returns the rounded integer value. `m` is the base of the logarithm (of type `int/float/Decimal`), `n` is the argument (of type `int/float/Decimal`), and `precision` is an optional calculation precision parameter (integer, defaulting to 50 decimal places). ```python def calculate_latex(latex_expr, precision=10): ``` Evaluate a LaTeX expression and return a string. `latex_expr` (`str`) is a mathematical expression in LaTeX format. `precision` (`int`) is the precision of the result, defaulting to 10 significant digits.
text/markdown
Zhu Chongjing
zhuchongjing_pypi@163.com
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.7
2025-06-21T03:31:30.391722
CalcTool-0.1.2.9-py3-none-any.whl
7,427
1d/64/d9f661d222ddb2f805f1bf110dab0f3352a888b388bd3b051fae33f4d297/CalcTool-0.1.2.9-py3-none-any.whl
py3
bdist_wheel
null
false
d498e9b8251c7725cfb7a786a479cac5
8b5de4c69d55ef824b4b68ba218f271206ab1de482670e02d4eaa7421136b8ee
1d64d9f661d222ddb2f805f1bf110dab0f3352a888b388bd3b051fae33f4d297
null
[]
2.4
Clothoids
2.0.32
G1 and G2 fitting with clothoids, spline of clothoids, circle arc and biarc.
# Clothoids Python Bindings This directory contains the Python bindings for the Clothoids library. ## Installation Simply install via pip: ```bash pip install Clothoids ``` ### Build from source If a pip package isn't available, or you simply want to, you can build from source. Note that you need the following installed on your system: - Ruby >=2.6 with the following gems: rake, colorize, rubyzip; - CMake with Ninja; - Python 3.8-3.13 (other versions are untested). Once those requirements are installed, simply: ```bash git clone --branch stable --depth 1 https://github.com/SebastianoTaddei/Clothoids.git cd Clothoids ruby setup.rb rake pip install -e . ``` ## Usage The Python bindings provide a simple interface to the Clothoids C++ library. Look at the `example.py` file for a simple example. ## Authors These bindings were brought to you by: - [Sebastiano Taddei](https://github.com/SebastianoTaddei) - [Gabriele Masina](https://github.com/masinag)
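The binding's API surface isn't documented here (see `example.py` in the repository). As background, the clothoid geometry the library fits comes from Fresnel-type integrals of a linearly varying curvature; a minimal numerical sketch, with illustrative names and step counts, not the library's implementation:

```python
import math

def clothoid_point(s: float, kappa0: float = 0.0, dkappa: float = 1.0, steps: int = 1000):
    """Approximate the (x, y) point at arc length s along a clothoid.

    Heading integrates linearly varying curvature: theta(t) = kappa0*t + 0.5*dkappa*t^2,
    then x = integral of cos(theta), y = integral of sin(theta) (midpoint rule).
    A real implementation evaluates Fresnel integrals instead.
    """
    dt = s / steps
    x = y = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        theta = kappa0 * t + 0.5 * dkappa * t * t
        x += math.cos(theta) * dt
        y += math.sin(theta) * dt
    return x, y
```

With `dkappa=0` the curve degenerates to a straight line (`kappa0=0`) or a circular arc, which makes the sketch easy to sanity-check.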
text/markdown
Enrico Bertolazzi
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.9
2025-06-05T13:40:55.882499
clothoids-2.0.32-cp312-cp312-macosx_11_0_arm64.whl
456,321
55/ec/e0d74c6d3339bf4909d9c0baf11d0528d4c53c9b101e0b5c766d728f5c96/clothoids-2.0.32-cp312-cp312-macosx_11_0_arm64.whl
cp312
bdist_wheel
null
false
d853759c8162743700aac7fcb66e94a5
8dd005bc35153d643b8e07a8ad2422415ed5f9268c38dd38f698f26cf76fb51d
55ece0d74c6d3339bf4909d9c0baf11d0528d4c53c9b101e0b5c766d728f5c96
null
[ "LICENSE" ]
2.4
EXOSIMS
3.6.0
null
![Alt text](EXOSIMS_cropped.png) Exoplanet Open-Source Imaging Mission Simulator <a href="http://ascl.net/1706.010"><img src="https://img.shields.io/badge/ascl-1706.010-blue.svg?colorB=262255" alt="ascl:1706.010" /></a> ![Build Status](https://github.com/dsavransky/EXOSIMS/actions/workflows/ci.yml/badge.svg) [![Documentation Status](https://readthedocs.org/projects/exosims/badge/?version=latest)](https://exosims.readthedocs.io/en/latest/?badge=latest) [![Coverage Status](https://coveralls.io/repos/github/dsavransky/EXOSIMS/badge.svg?branch=master)](https://coveralls.io/github/dsavransky/EXOSIMS?branch=master) [![PyPI version](https://badge.fury.io/py/EXOSIMS.svg)](https://badge.fury.io/py/EXOSIMS) [![astropy](http://img.shields.io/badge/powered%20by-AstroPy-orange.svg?style=flat)](http://www.astropy.org/) [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](code_of_conduct.md) Quick Install -------------------------- Clone the repository, navigate to the top level directory (containing setup.py) and execute: ``` pip install -e . ``` Full installation and configuration instructions available here: https://exosims.readthedocs.io/en/latest/install.html Documentation and Quick Start Guide ----------------------------------------------------------- - https://exosims.readthedocs.io - https://exosims.readthedocs.io/en/latest/quickstart.html Additional EXOSIMS tutorials are available here: https://github.com/dsavransky/YieldModelingWorkshopTutorial This repository is associated with the Yield Modeling Tools Workshops. For additional information, see here: https://exoplanets.nasa.gov/exep/events/456/exoplanet-yield-modeling-tools-workshop/ Contributing ------------------------------------- All contributions are very welcome. 
Before starting on your first contribution to EXOSIMS, please read the [Contributing Guide](https://github.com/dsavransky/EXOSIMS/blob/master/CONTRIBUTING.md) Credits and Acknowledgements ------------------------------ Created by Dmitry Savransky Written by Christian Delacroix, Daniel Garrett, Dean Keithly, Gabriel Soto, Corey Spohn, Walker Dula, Sonny Rappaport, Michael Turmon, Rhonda Morgan, Grace Genszler, and Dmitry Savransky, with contributions by Patrick Lowrance, Ewan Douglas, Jackson Kulik, Jeremy Turner, Jayson Figueroa, Owen Sorber, Maxwell Zweig, Ahnika Gee, Claire Cahill, Saanika Choudhary, and Neil Zimmerman. EXOSIMS makes use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013). EXOSIMS optionally makes use of Forecaster (http://ascl.net/1701.007). EXOSIMS optionally makes use of NASA's Navigation and Ancillary Information Facility's SPICE system components (https://naif.jpl.nasa.gov/naif/). EXOSIMS optionally uses values from: Mamajek, E. "A Modern Mean Dwarf Stellar Color and Effective Temperature Sequence", http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt, Version 2017.09.06 EXOSIMS development is supported by NASA Grant Nos. NNX14AD99G (GSFC), NNX15AJ67G (WPS) and NNG16PJ24C (SIT). For further information, please see EXOSIMS's ASCL page and the following papers: - http://adsabs.harvard.edu/abs/2016JATIS...2a1006S - http://adsabs.harvard.edu/abs/2016SPIE.9911E..19D
text/x-rst
Dmitry Savransky
ds264@cornell.edu
null
null
null
null
[]
[]
https://github.com/dsavransky/EXOSIMS
null
null
[]
[]
[]
[ "numpy>=1.20.0", "scipy>=1.7.2", "astropy>=6.0.0", "jplephem>=2.20.0", "ortools>=9.0", "h5py>=3.7.0", "astroquery>=0.4.8", "exo-det-box", "tqdm>=4.59", "pandas>=1.3", "MeanStars>=3.4.0", "synphot>=1.3.0", "ipyparallel>=8.0.0", "keplertools>=1.2.1" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.10.17
2025-06-17T14:38:02.272094
exosims-3.6.0-py3-none-any.whl
17,361,802
fe/32/e5671237cc88d36e38e578ddc90e2f4c5a9396e218ea89cf0f1341dfc817/exosims-3.6.0-py3-none-any.whl
py3
bdist_wheel
null
false
9ef292d206def499b3047474bfcb5e1c
436563c207f106e5c7ae8f051dbb6d1d4c0fd55809adfeef38f13ee30f726bb9
fe32e5671237cc88d36e38e578ddc90e2f4c5a9396e218ea89cf0f1341dfc817
null
[ "LICENSE" ]
2.4
F1-lap-time-telementary
0.1.1
Plot F1 telemetry data using FastF1
# F1_lap_time_telementary This module makes it easy to visualize telemetry data of the fastest lap from a session. It heavily uses the great FastF1 API: https://github.com/theOehrly/Fast-F1/tree/master This document goes through all available API functions; if you just want a quick way to plot, review the `plot_comparison` function. # Installation ```console pip install fastf1plot ``` # API Functions ## setup_plotting Calls `fastf1.plotting.setup_mpl` from FastF1 to configure the plot ### Inputs No inputs ### Output Sets up matplotlib for use with FastF1 - nothing is returned from the function ### Usage ```python from F1_lap_time_telementary import setup_plotting setup_plotting() ``` ## get_session_data Gets the session data from FastF1 ### Inputs year - Year of the race as an integer gp - The full name of the grand prix as a string session_type - Abbreviated session names, mapped as follows: | session_type | Session | | --------------- | ------------------| | 'FP1' | Free Practice 1 | | 'FP2' | Free Practice 2 | | 'FP3' | Free Practice 3 | | 'SQ' | Sprint Qualifying | | 'SS' | Sprint Shootout | | 'Q' | Qualifying | | 'S' | Sprint | | 'R' | Race | **Note: Sprint Shootout is for 2023 only** ### Outputs session - Returns a session object ### Usage ```python from F1_lap_time_telementary import get_session_data year = 2025 grand_prix = 'Chinese Grand Prix' session_type='Q' session = get_session_data(year, grand_prix, session_type) ``` ## get_driver_data Returns the telemetry data from the driver's fastest lap ### Inputs session - A session object (which can be obtained by running `get_session_data`) driver - Three-letter abbreviation for a driver. E.g. 
'BOT' for Bottas ### Output lap - The fastest lap set by the driver in that session car_data - telemetry data from the fastest lap ### Usage ```python from F1_lap_time_telementary import get_driver_data session = get_session_data(2025, 'Chinese Grand Prix', 'Q') driver = 'ALO' lap, car_data = get_driver_data(session, driver) ``` ## get_min_max_speed Gets the minimum and maximum speed by the driver ### Inputs car_data - Telemetry data from the car (which can be obtained by running `get_driver_data`) ### Outputs min_speed - Slowest speed of the car in a lap max_speed - Top speed of the car in a lap ### Usage ```python from F1_lap_time_telementary import get_min_max_speed min_speed, max_speed = get_min_max_speed(car_data) ``` ## plot_corners Plots corner lines in a speed-distance plot ### Inputs ax - Axes circuit_data - Circuit data (obtained by `session.get_circuit_info`) car_data - Telemetry data from the car (which can be obtained by running `get_driver_data`) ### Outputs No output - Adds vertical dashed lines on a speed-distance plot ### Usage ```python from F1_lap_time_telementary import plot_corners plot_corners(axs[0], circuit_data, car_data) ``` ## plot_telemetry Plots telemetry data in 4 subplots: 1) Speed-Distance 2) Throttle-Distance 3) Brake-Distance 4) Gear-Distance ### Inputs axs - The axes you are plotting on car_data - Telemetry data from the car (which can be obtained by running `get_driver_data`) label - The driver's trace that is being plotted ### Outputs Nothing returned - plots a (4,1) subplot ### Usage ```python from F1_lap_time_telementary import plot_telemetry session = get_session_data(2025, 'Chinese Grand Prix', 'Q') lap, car_data = get_driver_data(session, 'ALO') plot_telemetry(axs, car_data, lap) ```
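The min/max logic behind `get_min_max_speed` reduces to an aggregation over the Speed channel. A self-contained sketch with synthetic data; real `car_data` is a FastF1 telemetry object, a plain dict stands in here, and the helper name is illustrative:

```python
# Synthetic stand-in for FastF1 car_data: one 'Speed' channel in km/h.
car_data = {"Speed": [82, 141, 227, 305, 318, 296, 164, 97]}

def min_max_speed(channel):
    """Return (slowest, fastest) speed over a lap's Speed channel."""
    return min(channel), max(channel)

min_speed, max_speed = min_max_speed(car_data["Speed"])
print(min_speed, max_speed)  # 82 318
```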
text/markdown
null
Sidhartha Kumar <siddkumar718@gmail.com>
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "fastf1>=3.0.0", "matplotlib>=3.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.4
2025-06-10T14:06:14.925444
f1_lap_time_telementary-0.1.1-py3-none-any.whl
3,345
73/6b/5799cceb4d6fbd24295c3e7e39652761c040b8804f90feb36e4b44ccd4bd/f1_lap_time_telementary-0.1.1-py3-none-any.whl
py3
bdist_wheel
null
false
753e83b928d62c91c858fb41d42007d9
77451e0c98d5ec367728920dc9863ecd00686bdcafc0bf51669cdce001e76989
736b5799cceb4d6fbd24295c3e7e39652761c040b8804f90feb36e4b44ccd4bd
MIT
[ "LICENSE" ]
2.4
FELDSHAM
0.0.4
Implementation of the Feldman–Shamir secret sharing scheme
My package description
text/markdown
Alexander
lloollfox@mail.ru
null
null
MIT
null
[]
[]
null
null
null
[]
[]
[]
[ "sympy>=1.7" ]
[]
[]
[]
[ "Source Repository, https://github.com/A-Sharan1/" ]
twine/6.1.0 CPython/3.13.5
2025-06-16T13:02:32.394356
feldsham-0.0.4-py3-none-any.whl
6,771
ee/ed/b643e81a2ee97cc1c34a2f26770bff88f6c636bbb55ad4b7b758e7f5934a/feldsham-0.0.4-py3-none-any.whl
py3
bdist_wheel
null
false
5cf850cd5d866e687ecabbd5b27af13b
6a703174735b88d3ad37e3c53528f9cc3ab92cec7029eb0846e49495c9ff4f53
eeedb643e81a2ee97cc1c34a2f26770bff88f6c636bbb55ad4b7b758e7f5934a
null
[ "LICENSE.txt" ]
2.4
FPC
0.80
Frank's Personal Collection
Frank's Personal Collection
null
Frank
luoziluojun@126.com
Frank
luoziluojun@126.com
BSD License
null
[ "Development Status :: 4 - Beta", "Operating System :: OS Independent", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "Programming Language :: Python :: 3", "Topic :: Software Development :: Libraries" ]
[ "all" ]
http://ff2.pw
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.0.1 CPython/3.12.6
2025-06-10T09:40:43.120302
fpc-0.80-py3-none-any.whl
8,388
33/7f/7f0b3ed378c3a7f2630052eaae171e4a5f7f66eabc32039705f0ca17424d/fpc-0.80-py3-none-any.whl
py3
bdist_wheel
null
false
c52cd9c3a0c792ff3e97645af25c0b2b
4d8c56d20baa99ad3627df458ecd9c835999316315a7485418b2722b0e9286de
337f7f0b3ed378c3a7f2630052eaae171e4a5f7f66eabc32039705f0ca17424d
null
[]
2.4
Flask-Bauto
0.0.15
null
# Flask-Bauto Automated Flask blueprints based on dataclasses
text/markdown
null
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "Flask-FEFset", "Flask-UXFab", "Flask-SQLAlchemy", "Flask-IAM", "Bull-Stack; extra == \"fullstack\"", "build; extra == \"dev\"", "twine; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.0rc3
2025-06-01T16:12:09.043942
flask_bauto-0.0.15.tar.gz
10,663
c0/ce/d7e0086af0c0be07528af5168d3b1a0ac83c6462b11f27fcebc68fa84735/flask_bauto-0.0.15.tar.gz
source
sdist
null
false
4a2984d79831f97504e57e5bb3c4fd05
f7021313fc709206978d946b185c328ed26bb5caef12a153afc55257fab1a4ed
c0ced7e0086af0c0be07528af5168d3b1a0ac83c6462b11f27fcebc68fa84735
null
[ "LICENSE" ]
2.4
FloTorch-core
2.9.7
A Python project for FloTorch
# 🚀 FloTorch-core **FloTorch-core** is a modular and extensible Python framework for building LLM-powered RAG (Retrieval-Augmented Generation) pipelines. It offers plug-and-play components for embeddings, chunking, retrieval, gateway-based LLM calls, and RAG evaluation. --- ## ✨ Features - 🧩 Text Chunking (Fixed-size, Hierarchical) - 🧠 Embedding Models (Titan, Cohere, Bedrock) - 🔍 Document Retrieval (OpenSearch + Vector Storage) - 💻 Bedrock/sagemaker/gateway inferencer - 🔌 Unified LLM Gateway (OpenAI, Bedrock, Ollama, etc.) - 📏 RAG Evaluation (RAGAS Metrics) - ☁️ AWS Integration (S3, DynamoDB, Lambda) - 🧢 Built-in Testing Support --- ## 📆 Installation ```bash pip install FloTorch-core ``` To install development dependencies: ```bash pip install FloTorch-core[dev] ``` --- ## 📂 Project Structure ``` flotorch/ ├── inferencer/ # LLM gateway/bedrock/sagemaker interface ├── embedding/ # Embedding models ├── chunking/ # Text chunking logic ├── evaluator/ # RAG evaluation (RAGAS) ├── storage/ # Vector DB, S3, DynamoDB ├── util/ # Utilities and helpers ├── rerank/ # Ranking documents ├── guardrails/ # Enabling guardrails ├── reader/ # reader for json/pdf ``` --- ## 📖 Usage Example ### Reader ``` from flotorch_core.reader.json_reader import JSONReader from flotorch_core.storage.s3_storage import S3StorageProvider json_reader = JSONReader(S3StorageProvider(<S3 bucket>)) json_reader.read(<path>) ``` ### Embedding ``` from flotorch_core.embedding.embedding_registry import embedding_registry embedding_class = embedding_registry.get_model(<model id>) # model id example: amazon.titan-text-express-v1, amazon.titan-embed-text-v2:0, cohere.embed-multilingual-v3 ``` ### Vector storage (opensearch) ``` from flotorch_core.storage.db.vector.open_search import OpenSearchClient vector_storage_object = OpenSearchClient( <opensearch_host>, <opensearch_port>, <opensearch_username>, <opensearch_password>, <index_id>, <embedding object> ) ``` ### Vector storage (bedrock knowledgebase) ``` from 
flotorch_core.storage.db.vector.bedrock_knowledgebase_storage import BedrockKnowledgeBaseStorage vector_storage_object = BedrockKnowledgeBaseStorage( knowledge_base_id=<knowledge_base_id>, region=<aws_region> ) ``` ### Guardrails over vector storage ``` from flotorch_core.storage.db.vector.guardrails_vector_storage import GuardRailsVectorStorage base_guardrails = BedrockGuardrail(<guardrail_id>, <guardrail_version>, <aws_region>) vector_storage_object = GuardRailsVectorStorage( vector_storage_object, base_guardrails, <enable_prompt_guardrails(True/False)>, <enable_context_guardrails(True/False)> ) ``` ### Inferencer ``` from flotorch_core.inferencer.bedrock_inferencer import BedrockInferencer from flotorch_core.inferencer.gateway_inferencer import GatewayInferencer from flotorch_core.inferencer.sagemaker_inferencer import SageMakerInferencer inferencer = BedrockInferencer( <model_id>, <region>, <number of n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) inferencer = GatewayInferencer( model_id=<model_id>, api_key=<api_key>, base_url=<base_url>, n_shot_prompts=<n_shot_prompts>, n_shot_prompt_guide_obj=<n_shot_prompt_guide_obj> ) inferencer = SageMakerInferencer( <model_id>, <region>, <arn_role>, <n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) ``` ### GuardRail over inferencer ``` from flotorch_core.inferencer.guardrails.guardrails_inferencer import GuardRailsInferencer inferencer = GuardRailsInferencer(inferencer, base_guardrails) ``` --- ## 📬 Maintainer **Shiva Krishna** 📧 Email: shiva.krishnaah@gmail.com **Adil Raza** 📧 Email: adilraza.9752@gmail.com --- ## 📄 License This project is licensed under the [MIT License](LICENSE). --- ## 🌐 Links - GitHub: [https://github.com/FissionAI/flotorch-core](https://github.com/FissionAI/flotorch-core)
text/markdown
null
Shiva Krishna <shiva.krishnaah@gmail.com>
null
null
MIT
null
[]
[]
null
null
null
[]
[]
[]
[ "langchain==0.3.14", "boto3==1.36.2", "ollama==0.4.6", "PyPDF2==3.0.1", "opensearch-py==2.8.0", "sagemaker==2.235.2", "openai==1.57.4", "ragas==0.2.14", "psycopg2-binary==2.9.9", "requests==2.31.0", "pytest==8.3.4; extra == \"dev\"", "testcontainers==4.9.0; extra == \"dev\"", "minio==7.2.15;...
[]
[]
[]
[]
twine/6.1.0 CPython/3.9.0
2025-06-04T07:18:26.138484
flotorch_core-2.9.7-py3-none-any.whl
91,819
f2/97/23dbf814b497600fa83e7e594898d6aeb03336f4dc1712d0669aef1fc59a/flotorch_core-2.9.7-py3-none-any.whl
py3
bdist_wheel
null
false
ecf3881ea4c773853d6f0db9a1d8a2ba
2aac299bd031b2c977b502b5851b18aeb5f428efede4359dce3951d7eb7719dc
f29723dbf814b497600fa83e7e594898d6aeb03336f4dc1712d0669aef1fc59a
null
[ "LICENSE" ]
2.4
FloTorch-core
2.9.11
A Python project for FloTorch
# 🚀 FloTorch-core **FloTorch-core** is a modular and extensible Python framework for building LLM-powered RAG (Retrieval-Augmented Generation) pipelines. It offers plug-and-play components for embeddings, chunking, retrieval, gateway-based LLM calls, and RAG evaluation. --- ## ✨ Features - 🧩 Text Chunking (Fixed-size, Hierarchical) - 🧠 Embedding Models (Titan, Cohere, Bedrock) - 🔍 Document Retrieval (OpenSearch + Vector Storage) - 💻 Bedrock/sagemaker/gateway inferencer - 🔌 Unified LLM Gateway (OpenAI, Bedrock, Ollama, etc.) - 📏 RAG Evaluation (RAGAS Metrics) - ☁️ AWS Integration (S3, DynamoDB, Lambda) - 🧢 Built-in Testing Support --- ## 📆 Installation ```bash pip install FloTorch-core ``` To install development dependencies: ```bash pip install FloTorch-core[dev] ``` --- ## 📂 Project Structure ``` flotorch/ ├── inferencer/ # LLM gateway/bedrock/sagemaker interface ├── embedding/ # Embedding models ├── chunking/ # Text chunking logic ├── evaluator/ # RAG evaluation (RAGAS) ├── storage/ # Vector DB, S3, DynamoDB ├── util/ # Utilities and helpers ├── rerank/ # Ranking documents ├── guardrails/ # Enabling guardrails ├── reader/ # reader for json/pdf ``` --- ## 📖 Usage Example ### Reader ``` from flotorch_core.reader.json_reader import JSONReader from flotorch_core.storage.s3_storage import S3StorageProvider json_reader = JSONReader(S3StorageProvider(<S3 bucket>)) json_reader.read(<path>) ``` ### Embedding ``` from flotorch_core.embedding.embedding_registry import embedding_registry embedding_class = embedding_registry.get_model(<model id>) # model id example: amazon.titan-text-express-v1, amazon.titan-embed-text-v2:0, cohere.embed-multilingual-v3 ``` ### Vector storage (opensearch) ``` from flotorch_core.storage.db.vector.open_search import OpenSearchClient vector_storage_object = OpenSearchClient( <opensearch_host>, <opensearch_port>, <opensearch_username>, <opensearch_password>, <index_id>, <embedding object> ) ``` ### Vector storage (bedrock knowledgebase) ``` from 
flotorch_core.storage.db.vector.bedrock_knowledgebase_storage import BedrockKnowledgeBaseStorage vector_storage_object = BedrockKnowledgeBaseStorage( knowledge_base_id=<knowledge_base_id>, region=<aws_region> ) ``` ### Guardrails over vector storage ``` from flotorch_core.storage.db.vector.guardrails_vector_storage import GuardRailsVectorStorage base_guardrails = BedrockGuardrail(<guardrail_id>, <guardrail_version>, <aws_region>) vector_storage_object = GuardRailsVectorStorage( vector_storage_object, base_guardrails, <enable_prompt_guardrails(True/False)>, <enable_context_guardrails(True/False)> ) ``` ### Inferencer ``` from flotorch_core.inferencer.bedrock_inferencer import BedrockInferencer from flotorch_core.inferencer.gateway_inferencer import GatewayInferencer from flotorch_core.inferencer.sagemaker_inferencer import SageMakerInferencer inferencer = BedrockInferencer( <model_id>, <region>, <number of n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) inferencer = GatewayInferencer( model_id=<model_id>, api_key=<api_key>, base_url=<base_url>, n_shot_prompts=<n_shot_prompts>, n_shot_prompt_guide_obj=<n_shot_prompt_guide_obj> ) inferencer = SageMakerInferencer( <model_id>, <region>, <arn_role>, <n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) ``` ### GuardRail over inferencer ``` from flotorch_core.inferencer.guardrails.guardrails_inferencer import GuardRailsInferencer inferencer = GuardRailsInferencer(inferencer, base_guardrails) ``` --- ## 📬 Maintainer **Shiva Krishna** 📧 Email: shiva.krishnaah@gmail.com **Adil Raza** 📧 Email: adilraza.9752@gmail.com --- ## 📄 License This project is licensed under the [MIT License](LICENSE). --- ## 🌐 Links - GitHub: [https://github.com/FissionAI/flotorch-core](https://github.com/FissionAI/flotorch-core)
text/markdown
null
Shiva Krishna <shiva.krishnaah@gmail.com>
null
null
MIT
null
[]
[]
null
null
null
[]
[]
[]
[ "langchain==0.3.14", "boto3==1.36.2", "ollama==0.4.6", "PyPDF2==3.0.1", "opensearch-py==2.8.0", "sagemaker==2.235.2", "openai==1.57.4", "ragas==0.2.14", "psycopg2-binary==2.9.9", "requests<3.0.0,>=2.31.0", "pytest==8.3.4; extra == \"dev\"", "testcontainers==4.9.0; extra == \"dev\"", "minio==...
[]
[]
[]
[]
twine/6.1.0 CPython/3.9.0
2025-06-20T06:28:05.008089
flotorch_core-2.9.11.tar.gz
59,989
af/c8/34d3f4d3ec113063f65dc159bdcd20986ede979e449bb23372a647508a87/flotorch_core-2.9.11.tar.gz
source
sdist
null
false
6ee583d6915e5bef449a766c7b6a1240
7f713b54e5396b0d7b32197b85b1a090d2b10435e7f108356f9919e05f8b12c8
afc834d3f4d3ec113063f65dc159bdcd20986ede979e449bb23372a647508a87
null
[ "LICENSE" ]
2.4
FloTorch-core
2.9.8
A Python project for FloTorch
# 🚀 FloTorch-core **FloTorch-core** is a modular and extensible Python framework for building LLM-powered RAG (Retrieval-Augmented Generation) pipelines. It offers plug-and-play components for embeddings, chunking, retrieval, gateway-based LLM calls, and RAG evaluation. --- ## ✨ Features - 🧩 Text Chunking (Fixed-size, Hierarchical) - 🧠 Embedding Models (Titan, Cohere, Bedrock) - 🔍 Document Retrieval (OpenSearch + Vector Storage) - 💻 Bedrock/sagemaker/gateway inferencer - 🔌 Unified LLM Gateway (OpenAI, Bedrock, Ollama, etc.) - 📏 RAG Evaluation (RAGAS Metrics) - ☁️ AWS Integration (S3, DynamoDB, Lambda) - 🧢 Built-in Testing Support --- ## 📆 Installation ```bash pip install FloTorch-core ``` To install development dependencies: ```bash pip install FloTorch-core[dev] ``` --- ## 📂 Project Structure ``` flotorch/ ├── inferencer/ # LLM gateway/bedrock/sagemaker interface ├── embedding/ # Embedding models ├── chunking/ # Text chunking logic ├── evaluator/ # RAG evaluation (RAGAS) ├── storage/ # Vector DB, S3, DynamoDB ├── util/ # Utilities and helpers ├── rerank/ # Ranking documents ├── guardrails/ # Enabling guardrails ├── reader/ # reader for json/pdf ``` --- ## 📖 Usage Example ### Reader ``` from flotorch_core.reader.json_reader import JSONReader from flotorch_core.storage.s3_storage import S3StorageProvider json_reader = JSONReader(S3StorageProvider(<S3 bucket>)) json_reader.read(<path>) ``` ### Embedding ``` from flotorch_core.embedding.embedding_registry import embedding_registry embedding_class = embedding_registry.get_model(<model id>) # model id example: amazon.titan-text-express-v1, amazon.titan-embed-text-v2:0, cohere.embed-multilingual-v3 ``` ### Vector storage (opensearch) ``` from flotorch_core.storage.db.vector.open_search import OpenSearchClient vector_storage_object = OpenSearchClient( <opensearch_host>, <opensearch_port>, <opensearch_username>, <opensearch_password>, <index_id>, <embedding object> ) ``` ### Vector storage (bedrock knowledgebase) ``` from 
flotorch_core.storage.db.vector.bedrock_knowledgebase_storage import BedrockKnowledgeBaseStorage vector_storage_object = BedrockKnowledgeBaseStorage( knowledge_base_id=<knowledge_base_id>, region=<aws_region> ) ``` ### Guardrails over vector storage ``` from flotorch_core.storage.db.vector.guardrails_vector_storage import GuardRailsVectorStorage base_guardrails = BedrockGuardrail(<guardrail_id>, <guardrail_version>, <aws_region>) vector_storage_object = GuardRailsVectorStorage( vector_storage_object, base_guardrails, <enable_prompt_guardrails(True/False)>, <enable_context_guardrails(True/False)> ) ``` ### Inferencer ``` from flotorch_core.inferencer.bedrock_inferencer import BedrockInferencer from flotorch_core.inferencer.gateway_inferencer import GatewayInferencer from flotorch_core.inferencer.sagemaker_inferencer import SageMakerInferencer inferencer = BedrockInferencer( <model_id>, <region>, <number of n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) inferencer = GatewayInferencer( model_id=<model_id>, api_key=<api_key>, base_url=<base_url>, n_shot_prompts=<n_shot_prompts>, n_shot_prompt_guide_obj=<n_shot_prompt_guide_obj> ) inferencer = SageMakerInferencer( <model_id>, <region>, <arn_role>, <n_shot_prompts>, <temperature>, <n_shot_prompt_guide_obj> ) ``` ### GuardRail over inferencer ``` from flotorch_core.inferencer.guardrails.guardrails_inferencer import GuardRailsInferencer inferencer = GuardRailsInferencer(inferencer, base_guardrails) ``` --- ## 📬 Maintainer **Shiva Krishna** 📧 Email: shiva.krishnaah@gmail.com **Adil Raza** 📧 Email: adilraza.9752@gmail.com --- ## 📄 License This project is licensed under the [MIT License](LICENSE). --- ## 🌐 Links - GitHub: [https://github.com/FissionAI/flotorch-core](https://github.com/FissionAI/flotorch-core)
text/markdown
null
Shiva Krishna <shiva.krishnaah@gmail.com>
null
null
MIT
null
[]
[]
null
null
null
[]
[]
[]
[ "langchain==0.3.14", "boto3==1.36.2", "ollama==0.4.6", "PyPDF2==3.0.1", "opensearch-py==2.8.0", "sagemaker==2.235.2", "openai==1.57.4", "ragas==0.2.14", "psycopg2-binary==2.9.9", "requests<3.0.0,>=2.31.0", "pytest==8.3.4; extra == \"dev\"", "testcontainers==4.9.0; extra == \"dev\"", "minio==...
[]
[]
[]
[]
twine/6.1.0 CPython/3.9.0
2025-06-05T06:31:33.900117
flotorch_core-2.9.8-py3-none-any.whl
91,826
c4/2a/3bec58e8cdab71b4577a500c19a747ac47a418ca173a81cfd08cc10d8d7d/flotorch_core-2.9.8-py3-none-any.whl
py3
bdist_wheel
null
false
680baa0cfe46e9f0298b951b528aa31c
8899133152318e1d62115199e450040ae87207e45880a1016883da5aea39edfd
c42a3bec58e8cdab71b4577a500c19a747ac47a418ca173a81cfd08cc10d8d7d
null
[ "LICENSE" ]
2.4
GPopt
0.9.2
Bayesian Optimization using Gaussian Process Regression
Bayesian Optimization using Gaussian Process Regression
null
Thierry Moudiki
thierry.moudiki@gmail.com
null
null
BSD
null
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Programming Language :: Python :: 3" ]
[]
https://github.com/thierrymoudiki/GPopt
https://github.com/thierrymoudiki/GPopt/tarball/0.9.2
null
[]
[]
[]
[ "joblib", "matplotlib", "nnetsauce", "numpy", "pandas", "scipy", "scikit-learn", "threadpoolctl", "tqdm" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.9
2025-06-15T02:05:24.322268
gpopt-0.9.2.tar.gz
74,654
d5/c2/878eca3481810bef5c30c11228d78a47d8187b578a226eb97894dd707d51/gpopt-0.9.2.tar.gz
source
sdist
null
false
20b6870ea663300feeb5527e5067e4ae
086d0b0d0e355b33ce238fd0a40012401142f43a0d996a3f4c346cbaa5877a1d
d5c2878eca3481810bef5c30c11228d78a47d8187b578a226eb97894dd707d51
null
[ "LICENSE" ]
2.1
HardView
0.1.0
A Python library for collecting hardware information (BIOS, System, CPU, RAM, Disk, Network).
A comprehensive Python library for querying low-level hardware information on Windows and Linux, including BIOS, system, CPU, RAM, disk drives, and network adapters. Data is returned in JSON format.
null
gafoo
omarwaled3374@gmail.com
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: Microsoft :: Windows", "Operating System :: POSIX :: Linux", "Topic :: System :: Hardware", "Development Status :: 3 - Alpha" ]
[]
https://github.com/gafoo173/HardView
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.2
2025-06-17T08:52:04.680984
hardview-0.1.0-cp38-cp38-win_amd64.whl
15,065
81/2d/72a6487a66436e6108a14822dd47c00f8ff3e5324f84ebf2230912a6ea4d/hardview-0.1.0-cp38-cp38-win_amd64.whl
cp38
bdist_wheel
null
false
918f79b1fc19c054fe4ee25c339ff7a6
269607fc643acffd1a3f76e3957ade23d7c842b2a7446110f9dadedeb130cfb0
812d72a6487a66436e6108a14822dd47c00f8ff3e5324f84ebf2230912a6ea4d
null
[]
2.4
HelperFunctionsLiam167
0.1.4
Reusable Plotly/Colab helpers for analytics
null
null
Liam Crowley
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "plotly", "pandas" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.11.9
2025-06-17T14:43:06.222038
helperfunctionsliam167-0.1.4-py3-none-any.whl
6,376
e9/d1/3a15cdaefa81176c77c609237376a56b4ea74c6391c2bab7e2374b7ffd86/helperfunctionsliam167-0.1.4-py3-none-any.whl
py3
bdist_wheel
null
false
8cf55087a5801f707615122094c21450
0948f10aa94bf14b4776494f755f3445ad6f11a3b39d9acd7e5ed41c5bd20944
e9d13a15cdaefa81176c77c609237376a56b4ea74c6391c2bab7e2374b7ffd86
null
[]
2.4
HyTechMaster-STT
0.1
This is a speech-to-text package created by Arman Rathore
null
null
Arman Rathore
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.5
2025-06-28T18:53:47.743630
hytechmaster_stt-0.1.tar.gz
1,765
6c/60/20dcb53ad103fbb496470eff28a6521cc7e5ffc09600b7a476ac04d3864e/hytechmaster_stt-0.1.tar.gz
source
sdist
null
false
f6ba1baf9c1408ef717c6bf7293bd85c
79491c3f4b5bb16880e853e91bcc3f09b31462b80cb7a1ab8570380ac6183f4e
6c6020dcb53ad103fbb496470eff28a6521cc7e5ffc09600b7a476ac04d3864e
null
[]
2.2
IsoSpecPy
2.3.0.dev1
IsoSpecPy is a Python library for computing isotopic distributions of molecules.
null
null
null
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "cffi" ]
[]
[]
[]
[]
twine/5.1.1 CPython/3.12.10
2025-06-05T12:19:31.126746
isospecpy-2.3.0.dev1-cp311-cp311-musllinux_1_2_x86_64.whl
1,168,747
00/9f/30aa56e88fdd952ef2c7d4789761245193fe7915cd5e26f1e26320c7a789/isospecpy-2.3.0.dev1-cp311-cp311-musllinux_1_2_x86_64.whl
cp311
bdist_wheel
null
false
e029c7917d2220464ca21e147b3b5231
7c8cb984711c4e35a6391e08bbe3097ed29f856fd37be25d6e38ff44e7bf3556
009f30aa56e88fdd952ef2c7d4789761245193fe7915cd5e26f1e26320c7a789
null
[]
2.2
IsoSpecPy
2.3.0.dev1
IsoSpecPy is a Python library for computing isotopic distributions of molecules.
null
null
null
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "cffi" ]
[]
[]
[]
[]
twine/5.1.1 CPython/3.12.10
2025-06-05T12:20:49.173919
isospecpy-2.3.0.dev1-cp38-cp38-win_amd64.whl
73,579
81/6e/8d46db3591d52ac57f133cbddd413f9a0367b73769df64c981a141c36846/isospecpy-2.3.0.dev1-cp38-cp38-win_amd64.whl
cp38
bdist_wheel
null
false
98fbf821d7e3fde53ffb4feea2beb67f
46bf95ecbb3215d5601a90eefb54eac3b34dfada3bf878db47cd98b3338376db
816e8d46db3591d52ac57f133cbddd413f9a0367b73769df64c981a141c36846
null
[]
2.2
IsoSpecPy
2.3.0.dev1
IsoSpecPy is a Python library for computing isotopic distributions of molecules.
null
null
null
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "cffi" ]
[]
[]
[]
[]
twine/5.1.1 CPython/3.12.10
2025-06-05T12:19:08.746830
isospecpy-2.3.0.dev1-cp310-cp310-win32.whl
72,839
54/af/f2db9a80516ad622ce7dfe93daefb49971a265692c6db416576d3221ff0f/isospecpy-2.3.0.dev1-cp310-cp310-win32.whl
cp310
bdist_wheel
null
false
2ce6f547e487f3be976be6e683925e96
b4c181daf4120349e2dbd588b361802b83f744f0cd55f8f009e5066c852e56c4
54aff2db9a80516ad622ce7dfe93daefb49971a265692c6db416576d3221ff0f
null
[]
2.4
LaplaPy
0.2.1
Symbolic derivative & Laplace transform with step-by-step output
# LaplaPy: Advanced Symbolic Laplace Transform Analysis **Scientific Computing Package for Differential Equations, System Analysis, and Control Theory** A comprehensive Python library for symbolic Laplace transforms with rigorous mathematical foundations, designed for engineers, scientists, and researchers. --- ## Overview `LaplaPy` provides a powerful symbolic computation environment for: 1. **Time-domain analysis**: Derivatives, integrals, and function manipulation 2. **Laplace transforms**: With rigorous Region of Convergence (ROC) determination 3. **System analysis**: Pole-zero identification, stability analysis, and frequency response 4. **ODE solving**: Complete solution of linear differential equations with initial conditions 5. **Control system tools**: Bode plots, time-domain responses, and transfer function analysis --- ## Key Features - **Mathematical Rigor**: Implements Laplace transform theory with proper ROC analysis - **Causal System Modeling**: Automatic handling of Heaviside functions for physical systems - **Step-by-Step Solutions**: Educational mode for learning complex concepts - **Comprehensive System Analysis**: Pole-zero identification, stability criteria, frequency response - **ODE Solver**: Complete solution workflow for linear differential equations - **Visualization Tools**: Bode plot generation and time-domain simulations --- ## Installation ```bash pip install LaplaPy ``` For development: ```bash git clone https://github.com/4211421036/LaplaPy.git cd LaplaPy pip install -e .[dev] ``` --- ## Quickstart ### Basic Operations ```python from LaplaPy import LaplaceOperator, t, s # Initialize with expression (causal system by default) op = LaplaceOperator("exp(-3*t) + sin(2*t)", show_steps=True) # Compute derivative d1 = op.derivative(order=1) # Laplace transform with ROC analysis F_s, roc, poles, zeros = op.laplace() # Inverse Laplace transform f_t = op.inverse_laplace() ``` ### ODE Solving ```python from sympy import Eq, Function, 
Derivative, exp # Define a differential equation f = Function('f')(t) ode = Eq(Derivative(f, t, t) + 3*Derivative(f, t) + 2*f, exp(-t)) # Solve with initial conditions solution = op.solve_ode(ode, {0: 0, 1: 1}) # f(0)=0, f'(0)=1 ``` ### System Analysis ```python # Frequency response magnitude, phase = op.frequency_response() # Time-domain response to input response = op.time_domain_response("sin(4*t)") # Generate Bode plot data omega, mag_db, phase_deg = op.bode_plot(ω_range=(0.1, 100), points=100) ``` --- ## CLI Usage ```bash LaplaPy "exp(-2*t)*sin(3*t)" --laplace --deriv 2 LaplaPy "s/(s**2 + 4)" --inverse LaplaPy "Derivative(f(t), t, t) + 4*f(t) = exp(-t)" --ode --ic "f(0)=0" "f'(0)=1" ``` **Options**: - `--deriv N`: Compute Nth derivative - `--laplace`: Compute Laplace transform - `--inverse`: Compute inverse Laplace transform - `--ode`: Solve ODE (provide equation) - `--ic`: Initial conditions (e.g., "f(0)=0", "f'(0)=1") - `--causal/--noncausal`: System causality assumption - `--quiet`: Suppress step-by-step output --- ## Mathematical Foundations ### Laplace Transform $$\mathcal{L}\{f(t)\}(s) = \int_{0^-}^{\infty} e^{-st} f(t) dt$$ ### Derivative Property $$\mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - \sum_{k=1}^{n} s^{n-k} f^{(k-1)}(0^+)$$ ### Region of Convergence - For causal systems: Re(s) > σ_max (right-half plane) - Proper ROC determination for stability analysis ### Pole-Zero Analysis - Transfer function: $H(s) = \frac{N(s)}{D(s)}$ - Poles: Roots of denominator polynomial - Zeros: Roots of numerator polynomial ### Frequency Response $$H(j\omega) = H(s)\big|_{s=j\omega} = |H(j\omega)| e^{j\angle H(j\omega)}$$ --- ## Examples ### Second-Order System Analysis ```python op = LaplaceOperator("1/(s**2 + 0.6*s + 1)", show_steps=True) # Get poles and zeros F_s, roc, poles, zeros = op.laplace() # Frequency response magnitude, phase = op.frequency_response() # Bode plot data omega, mag_db, phase_deg = op.bode_plot(ω_range=(0.1, 10), points=200) ``` ### Circuit Analysis 
(RLC Network) ```python # Define circuit equation: L*di/dt + R*i + 1/C*∫i dt = V_in L, R, C = 0.5, 4, 0.25 op = LaplaceOperator("V_in(s)", show_steps=True) # Impedance representation Z = L*s + R + 1/(C*s) current = op.time_domain_response("V_in(s)/" + str(Z)) # Response to step input step_response = current.subs("V_in(s)", "1/s") ``` --- ## Development & Testing ```bash # Run tests pytest tests/ # Generate documentation cd docs make html # Contribution guidelines CONTRIBUTING.md ``` --- ## Scientific Applications 1. **Control Systems**: Stability analysis, controller design 2. **Circuit Analysis**: RLC networks, filter design 3. **Vibration Engineering**: Damped oscillator analysis 4. **Signal Processing**: System response characterization 5. **Communication Systems**: Filter design, modulation analysis 6. **Mechanical Systems**: Spring-mass-damper modeling --- ## Documentation Wiki Full documentation available at: [LaplaPy Documentation WiKi](https://github.com/4211421036/LaplaPy/wiki) Includes: - Mathematical background - API reference - Tutorial notebooks - Application examples --- ## License MIT License --- ## Cite This Work ```bibtex @software{LaplaPy, author = {GALIH RIDHO UTOMO}, title = {LaplaPy: Advanced Symbolic Laplace Transform Analysis}, year = {2025}, publisher = {GitHub}, howpublished = {\url{https://github.com/4211421036/LaplaPy}} } ```
text/markdown
null
GALIH RIDHO UTOMO <g4lihru@students.unnes.ac.id>
null
null
MIT
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License" ]
[]
null
null
null
[]
[]
[]
[ "sympy>=1.10" ]
[]
[]
[]
[ "Homepage, https://github.com/4211421036/LaplaPy", "Repository, https://github.com/4211421036/LaplaPy" ]
twine/6.1.0 CPython/3.10.17
2025-06-16T18:52:11.032259
laplapy-0.2.1-py3-none-any.whl
12,346
5d/f2/235188128b170e4e2e0aa4c742577c216f4cd2a7d3b191c4998eb8c9fbfe/laplapy-0.2.1-py3-none-any.whl
py3
bdist_wheel
null
false
d568d5f7827aafeae0f3b5f33d826e49
dd6019e30cdb7e8a1388145fa35dc69a6a86ab15c06a8493b6ad4a89e5d1c704
5df2235188128b170e4e2e0aa4c742577c216f4cd2a7d3b191c4998eb8c9fbfe
null
[ "LICENSE" ]
2.1
LinguAligner
1.0.3
LinguAligner is a Python library for aligning annotations in parallel corpora: it maps annotations in the source-language text to their counterparts in the target-language translation.
<p align="center"> <img src="img/lingualigner.png" alt="LinguAligner Logo" width="300"/> </p> **LinguAligner** is a Python package for automatically translating annotated corpora while preserving their annotations. It supports multiple translation APIs and alignment strategies, making it a valuable tool for NLP researchers building multilingual datasets, particularly for low-resource languages. Natural Language Processing (NLP) research remains heavily centered on English, creating a language imbalance in AI. One way to improve linguistic diversity is by adapting annotated corpora from high-resource languages to others. However, preserving span-based annotation quality after translation requires precise alignment of annotations between the source and translated texts, a challenging task due to lexical, syntactic and semantic divergences between languages. **LinguAligner** provides an automated pipeline to align annotations within translated texts using several annotation alignment strategies. ## 🚀 Features - 🌐 **Translation Module**: Supports external translation services: - Google Translate - Microsoft Translator - DeepL - 🧠 **Annotation Alignment Module**: Implements multiple techniques: - **Exact / Fuzzy Matching**: Levenshtein, Gestalt - **Lemmatization-based Matching** using [spaCy](https://spacy.io/) - **Pre-compiled Translation Dictionaries** via Microsoft Lookup API - **Multilingual Contextual Embeddings** using [BERT-multilingual](https://huggingface.co/bert-base-multilingual-uncased) The pipeline operates sequentially, meaning that annotations aligned by earlier methods are not addressed by subsequent pipeline elements. According to our experiments, the order listed above works best. ## 📦 Installation Install via [PyPI](https://pypi.org/project/LinguAligner/): ```bash pip install LinguAligner ``` ## 🧪 Example Usage ### 1. 
Translate Corpora You can use the built-in translation APIs (an API key is needed) or translate your corpus with an external tool. ```python from LinguAligner import translation # Google Translate translator = translation.GoogleTranslator(source_lang="en", target_lang="pt", key="Google_KEY") translated_text = translator.translate("The soldiers were ordered to fire their weapons") # DeepL translator = translation.DeepLTranslator(source_lang="en", target_lang="pt", key="DEEPL_KEY") translated_text = translator.translate("The soldiers were ordered to fire their weapons") # Microsoft translator = translation.MicrosoftTranslator(source_lang="en", target_lang="pt", key="MICROSOFT_KEY") translated_text = translator.translate("The soldiers were ordered to fire their weapons") print(translated_text) ``` ### 2. Align Annotations Users can select the alignment strategies they intend to use and specify the order in which they should be applied. According to our findings, the best ordering is the one presented in the example below; however, we encourage you to experiment with different orders for your specific use case. ```python from LinguAligner import AlignmentPipeline # Define pipeline and model configuration config = { "pipeline": ["lemma", "M_trans", "word_aligner", "gestalt", "levenshtein"], "spacy_model": "pt_core_news_lg", "WAligner_model": "bert-base-multilingual-uncased" } aligner = AlignmentPipeline(config) # Source and translated data src_sent = "The soldiers land on the shore..." src_ann = "land" trans_sent = "Os soldados aterraram na costa." trans_ann = "terra" # Expected direct translation # Perform annotation alignment target_annotation = aligner.align_annotation( src_sent, src_ann, trans_sent, trans_ann ) print(target_annotation) # Output: ('aterraram', (12, 21)) ``` In this example, the word `land` is translated to `terra` (land as a noun) when considered in isolation, but as `aterraram` (land as a verb) when translated in context. 
Although `terra` is a valid translation of the annotation, it does not occur in the translated sentence and therefore cannot be aligned. Such misalignments highlight the need for additional processing to determine the correct annotation offsets in the translated text, in this case, mapping the word `terra` to `aterraram`. ## 🔧 Configuration You can customize the alignment behavior in the `config` variable: ```python config = { "pipeline": ["lemma", "word_aligner", "levenshtein"], # change pipeline elements and order "spacy_model": "fr_core_news_md", # change spacy model "WAligner_model": "bert-base-multilingual-uncased" # change multilingual model } ``` ## 🔧 Advanced Options ### Specify source annotation index to resolve ambiguity (Multiple Source Matches) ```python src_sent = "he was a good man because he had a kind heart" src_ann = "he" trans_sent = "ele era um bom homem porque ele tinha um bom coração" trans_ann = "ele" target_annotation = aligner.align_annotation( src_sent, src_ann, trans_sent, trans_ann, src_ann_start=29 ) print(target_annotation) # Output: ('ele', (28, 30)) ``` ### Using the M_trans Method The `M_trans` method relies on having multiple possible translations for each annotation. These must be prepared in advance and stored in a Python dictionary, where each key is a source annotation and the value is a list of alternative translations.
You can generate this translation dictionary using the Microsoft Translator API (requires a MICROSOFT_TRANSLATOR_KEY): ```python from LinguAligner import translation translator = translation.MicrosoftTranslator( source_lang="en", target_lang="pt", auth_key="MICROSOFT_TRANSLATOR_KEY" ) annotations_list = ["war", "land", "fire"] lookup_table = {} for word in annotations_list: lookup_table[word] = translator.getMultipleTranslations(word) # Use the lookup table in align_annotation aligner.align_annotation( "The soldiers were ordered to fire their weapons", "fire", "Os soldados receberam ordens para disparar as suas armas", "incêndio", M_trans_dict=lookup_table ) ``` #### 🔎 Example output of a lookup table: ```python { "fire": [ "fogo", "incêndio", "demitir", "despedir", "fogueira", "disparar", "chamas", "dispare", "lareira", "atirar", "atire" ] } ``` ## 📚 Use Cases LinguAligner was used to create translated versions of the following annotated corpora: - **ACE-2005** (EN → PT): Event extraction benchmark, now available in Portuguese via the [LDC](https://catalog.ldc.upenn.edu/LDC2024T05) - **T2S LUSA** (PT → EN): Portuguese news event corpus adapted to English [10.25747/ESFS-1P16](https://doi.org/10.25747/ESFS-1P16) - **MAVEN**: (EN → PT) High-coverage event trigger corpus from Wikipedia translated to Portuguese (available in this repository) - **WikiEvents**: (EN → PT) Document-level event extraction dataset translated to Portuguese (available in this repository) ## 🧩 Citation ### Coming soon...
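## 🔍 Appendix: Fuzzy Matching Sketch The fuzzy-matching fallbacks described in the pipeline can be approximated with Python's standard library. The sketch below is a hypothetical illustration, not LinguAligner's internal code: it uses `difflib`'s Ratcliff/Obershelp ("Gestalt") matcher to pick the token of the translated sentence most similar to the expected translation of the annotation.

```python
from difflib import SequenceMatcher

# Hypothetical sketch, NOT LinguAligner's internal code: approximate the
# Gestalt (Ratcliff/Obershelp) fallback by picking the token of the
# translated sentence most similar to the expected annotation translation.
def gestalt_align(trans_ann, trans_sent, threshold=0.5):
    best_tok, best_score = None, 0.0
    for tok in trans_sent.split():
        score = SequenceMatcher(None, trans_ann.lower(), tok.lower()).ratio()
        if score > best_score:
            best_tok, best_score = tok, score
    if best_score < threshold:
        return None  # no sufficiently similar token found
    start = trans_sent.index(best_tok)
    return best_tok, (start, start + len(best_tok))

print(gestalt_align("terra", "Os soldados aterraram na costa."))
# → ('aterraram', (12, 21)), matching the pipeline output shown earlier
```

The `threshold` parameter is an assumption for this sketch; the real pipeline combines several strategies and only falls back to fuzzy matching when earlier steps fail.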
text/markdown
null
lfc <lfc@di.uminho.pt>
null
null
null
null
[ "License :: OSI Approved :: MIT License" ]
[]
null
null
null
[]
[ "LinguAligner" ]
[]
[ "transformers[torch]", "spacy", "deep_translator", "deepl", "requests", "uuid", "fuzzywuzzy" ]
[]
[]
[]
[ "Homepage, https://github.com/lfcc1/LinguAligner", "Issues, https://github.com/lfcc1/LinguAligner/issues" ]
python-requests/2.32.3
2025-06-18T00:04:36.930098
LinguAligner-1.0.3.tar.gz
5,104,277
bb/09/1e74e7b9bd0ec2ce9bce79cc2abe536a24f9f0fd4f4ed2fab9627e8edc91/LinguAligner-1.0.3.tar.gz
source
sdist
null
false
bb38db49855f43686020c2fe8a1cb331
00eb280982d889cefe458023b65cc714b3d8b2d79acbfb9e877da94b679ba31c
bb091e74e7b9bd0ec2ce9bce79cc2abe536a24f9f0fd4f4ed2fab9627e8edc91
null
[]
2.4
MBbankchecker
0.2.5
bankchecker based on MBBank by The DT
null
null
JussKynn
null
null
null
null
null
[]
[]
null
null
null
[]
[]
[]
[ "Pillow", "requests", "aiohttp", "setuptools", "wheel", "wasmtime", "mb-capcha-ocr" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.3
2025-06-10T12:29:42.309825
mbbankchecker-0.2.5-py3-none-any.whl
26,170
d9/3a/ac08855419e3bf603d7f647fc422d1d1f104bf01edab73f4d84a38d2e5c8/mbbankchecker-0.2.5-py3-none-any.whl
py3
bdist_wheel
null
false
e2b128ae89b3fcf4f64583c95676a5ba
ba9dae1c00fc72a53d73eb3832420cb0359d1beee7dea125b9844da19382dcdf
d93aac08855419e3bf603d7f647fc422d1d1f104bf01edab73f4d84a38d2e5c8
null
[ "LICENSE.txt" ]
2.1
ManimTool
0.1.0.2
A package for tools in Manim
# 导入 Import ```python from ManimTool import * ``` 依赖库(Requires):`manim` **发现任何bug或问题,请反馈到tommy1008@dingtalk.com,谢谢!** **If you find any bugs or issues, please report them to tommy1008@dingtalk.com, thank you!** 了解更多详情,请前往[Manim Community](https://www.manim.community)。 For more details, visit [Manim Community](https://www.manim.community). ## 公式与图形 Formulas and Graphics ```python def ChineseMathTex(*texts, font="SimSun", tex_to_color_map={}, **kwargs): ``` **创建中文数学公式。** 在此函数的公式部分和`tex_to_color_map`中直接写入中文即可,无需包裹`\text{}`,返回`MathTex`。`font`,设置公式中的中文字体。所有原版参数都可使用。 **Creates Chinese mathematical formulas.** You can directly write Chinese characters in the formula part of this function and in `tex_to_color_map` without wrapping them in `\text{}`. Returns a `MathTex` object. The `font` parameter sets the Chinese font for the formula. All original parameters can be used. ```python def YellowLine(**kwargs): ``` **创建黄色的`Line`。** 所有原版参数都可使用。 **Creates a yellow `Line`.** All original parameters can be used. ```python def LabelDot(dot_label, dot_pos, label_pos=DOWN, buff=0.1): ``` **创建一个带有名字的点,返回带有点和名字的`VGroup`。** `dot_label`,点的名字,字符串。`dot_pos`,点的位置,位置列表`[x,y,z]`。`label_pos`,点的名字相对于点的位置,Manim中的八个方向。`buff`,点的名字与点的间距,数值。 **Creates a point with a name. Returns a `VGroup` containing the point and its name.** `dot_label` is the name of the point (a string). `dot_pos` is the position of the point (a list `[x, y, z]`). `label_pos` is the position of the label relative to the point (one of the eight directions in Manim). `buff` is the spacing between the label and the point (a numerical value).
```python def MathTexLine(mathtex: MathTex, direction=UP, buff=0.5, **kwargs): def MathTexBrace(mathtex: MathTex, direction=UP, buff=0.5, **kwargs): def MathTexDoublearrow(mathtex: MathTex, direction=UP, buff=0.5, **kwargs): ``` **创建可以标注内容的图形,返回带有图形和标注内容的`VGroup`。** `mathtex`,标注的公式,`MathTex`类型。`direction`,标注内容相对于线的位置,Manim中的八个方向。`buff`,标注内容与图形的间距,数值。图形的所有原版参数都可使用。 **Creates a graphical annotation for a MathTex object. Returns a `VGroup` containing the graphic and the annotation.** `mathtex` is the MathTex object to annotate. `direction` is the position of the annotation relative to the graphic (using Manim's direction constants). `buff` is the spacing between the graphic and the annotation. All original parameters of the underlying graphic can be used. ```python def ExtendedLine(line: Line, extend_distance: float) -> Line: ``` **将一条线延长`extend_distance`的距离,返回延长后的`Line`。** `line`,`Line`类型。`extend_distance`,要延长的距离,数值。 **Extends a line by `extend_distance`. Returns the extended `Line`.** `line` must be of type `Line`. `extend_distance` is the distance to extend (a numerical value). ## 交点 Intersection Points ```python def CircleInt(circle1, circle2): def LineCircleInt(line, circle): def LineInt(line1: Line, line2: Line) -> Optional[Tuple[float, float]]: def LineArcInt(line: Line, arc: Arc) -> list: ``` **函数名代表了寻找具体图形交点的功能**,例如`LineCircleInt`代表寻找`Line`和`Circle`的交点,返回点位置`[x,y,z]`,如果没有交点会返回`None`。 **The function names represent the function of finding the intersection points of specific shapes.** For example, `LineCircleInt` represents finding the intersection points of `Line` and `Circle`. Returns point position `[x,y,z]`. If there are no intersection points, it will return `None`.
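The line-line case above can be illustrated with a plain-Python sketch (示意代码,并非 ManimTool 源码 / illustrative only, not ManimTool's source). It uses the standard determinant formula on two 2D points per line and returns `None` for parallel lines, mirroring the behavior described for `LineInt`:

```python
# Illustrative sketch, not ManimTool's source: intersection of the infinite
# lines through (p1, p2) and (p3, p4) via the determinant of the direction
# vectors; returns None when the lines are parallel or coincident.
def line_int(p1, p2, p3, p4):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:  # parallel or coincident
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

print(line_int((0, 0), (2, 2), (0, 2), (2, 0)))  # → (1.0, 1.0)
```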
## 动画 Animations ```python def VisDrawArc(scene: Scene, arc: Arc, axis=OUT, run_time=1): ``` **创建可视化(显示半径)的绘弧动画。** 直接使用即可,无需写入`self.play()`内。 `scene`,动画场景。`arc`, 已经定义好的`Arc`。`axis`,只有2个值`IN`和`OUT`,分别表示正方向还是反方向作弧。`run_time`,这是绘弧动画的时长。 **Creates a visualized arc drawing animation (with radius display).** Can be used directly without wrapping in `self.play()`. `scene` refers to the animation scene. `arc` is the predefined `Arc` object. `axis` accepts two values: `IN` (positive direction) and `OUT` (negative direction), indicating the drawing direction of the arc. `run_time` denotes the duration of the arc drawing animation.
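The `ExtendedLine` helper described earlier can likewise be sketched in plain 2D Python (示意代码,并非 ManimTool 源码 / illustrative only, not ManimTool's source):

```python
import math

# Illustrative sketch, not ManimTool's source: extend a 2D segment past its
# end point by `extend_distance` along the segment's direction, analogous to
# what ExtendedLine does for Manim Line objects.
def extend_line(start, end, extend_distance):
    (x1, y1), (x2, y2) = start, end
    length = math.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / length, (y2 - y1) / length  # unit direction
    return (start, (x2 + ux * extend_distance, y2 + uy * extend_distance))

new_start, new_end = extend_line((0, 0), (3, 4), 5)
print(new_end)  # the length-5 segment extended by another 5 units
```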
text/markdown
Zhu Chongjing
tommy1008@dingtalk.com
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
null
[]
[]
[]
[ "manim" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.7
2025-06-01T14:28:24.370743
manimtool-0.1.0.2.tar.gz
6,799
bf/1b/63692b33283a0a7ce5643a43891f965974cfc01f0f218140893f20dc7549/manimtool-0.1.0.2.tar.gz
source
sdist
null
false
b3de3fa67034986dd474f2b34b5a4388
79a8e1ff8ec03ea89f958e6446345b2014aafe1f5b436b203b1bb5a226ac90c5
bf1b63692b33283a0a7ce5643a43891f965974cfc01f0f218140893f20dc7549
null
[]
2.4
Mensajes-albiery
6.0
A package to greet and say goodbye
# Mensajes The messaging package for Albiery's tests
text/markdown
Albiery de Leon
albiery@gmail.com
null
null
null
null
[ "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3.9", "Topic :: Utilities" ]
[]
https://www.albiery.dev
null
null
[]
[]
[]
[ "numpy>=1.23.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.1
2025-06-14T20:23:28.577531
mensajes_albiery-6.0.tar.gz
3,465
b0/2e/241839f3bca035ae1600f099c4ce8e3812b2007dd46f7bb34b451ccd2079/mensajes_albiery-6.0.tar.gz
source
sdist
null
false
ff304d9dd5f19abc6abfe2ed2926ff7d
9ee4fba78ba965fa9f824b3438a37c38be2b6da047994fa6688be4ef09a95d99
b02e241839f3bca035ae1600f099c4ce8e3812b2007dd46f7bb34b451ccd2079
null
[ "LICENSE" ]
2.3
MicroPie
0.11
An ultra micro ASGI web framework
[![Logo](https://patx.github.io/micropie/logo.png)](https://patx.github.io/micropie) ## **Introduction** **MicroPie** is a fast, lightweight, modern Python web framework that supports asynchronous web applications. Designed with **flexibility** and **simplicity** in mind, MicroPie enables you to handle high-concurrency applications with ease while allowing natural integration with external tools like Socket.IO for real-time communication. ### **Key Features** - 🔄 **Routing:** Automatic mapping of URLs to functions with support for dynamic and query parameters. - 🔒 **Sessions:** Simple, pluggable session management using cookies. - 🎨 **Templates:** Jinja2, if installed, for rendering dynamic HTML pages. - ⚙️ **Middleware:** Support for custom request middleware enabling functions like rate limiting, authentication, logging, and more. - ✨ **ASGI-Powered:** Built with asynchronous support for modern web servers like Uvicorn, Hypercorn, and Daphne, enabling high concurrency. - 🛠️ **Lightweight Design:** Only optional dependencies for flexibility and faster development/deployment. - ⚡ **Blazing Fast:** Check out how MicroPie compares to other popular ASGI frameworks below!
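The automatic URL-to-method routing listed above can be sketched in plain Python (a simplified, hypothetical illustration of the idea, not MicroPie's actual dispatch code): the first path segment selects a handler method by name, and any remaining segments become positional arguments.

```python
# Hypothetical sketch of automatic routing, not MicroPie's implementation:
# the first URL path segment names the handler method; the rest of the
# segments are passed through as positional arguments.
class TinyRouter:
    def greet(self, name="Guest"):
        return f"Hello, {name}!"

    def dispatch(self, path):
        parts = [p for p in path.strip("/").split("/") if p]
        handler = getattr(self, parts[0], None) if parts else None
        if handler is None or not callable(handler):
            return "404 Not Found"
        return handler(*parts[1:])

app = TinyRouter()
print(app.dispatch("/greet/Alice"))  # → Hello, Alice!
print(app.dispatch("/greet"))        # → Hello, Guest!
```

A real framework also has to handle query strings, request bodies, and async handlers; this sketch only shows the name-based dispatch idea.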
### **Useful Links** - **Homepage**: [patx.github.io/micropie](https://patx.github.io/micropie) - **API Reference**: [README.md#api-documentation](https://github.com/patx/micropie/blob/main/README.md#api-documentation) - **PyPI Page**: [pypi.org/project/MicroPie](https://pypi.org/project/MicroPie/) - **GitHub Project**: [github.com/patx/micropie](https://github.com/patx/micropie) - **File Issue/Request**: [github.com/patx/micropie/issues](https://github.com/patx/micropie/issues) - **Example Applications**: [github.com/patx/micropie/tree/main/examples](https://github.com/patx/micropie/tree/main/examples) - **Introduction Lightning Talk**: [Introduction to MicroPie on YouTube](https://www.youtube.com/watch?v=BzkscTLy1So) ## **Installing MicroPie** ### **Installation** Install MicroPie with all optional dependencies via pip: ```bash pip install micropie[standard] ``` This will install MicroPie along with `jinja2` for template rendering, and `multipart` for parsing multipart form data. If you would like to install **all** optional dependencies (everything from `standard` plus `orjson` and `uvicorn`) you can run: ```bash pip install micropie[all] ``` ### **Minimal Setup** You can also install MicroPie without ANY dependencies via pip: ```bash pip install micropie ``` For an ultra-minimalistic approach, download the standalone script: [MicroPie.py](https://raw.githubusercontent.com/patx/micropie/refs/heads/main/MicroPie.py) Place it in your project directory, and you are good to go. Note that `jinja2` must be installed separately to use the `_render_template` method and/or `multipart` for handling file data (the `_parse_multipart` method), but this *is* optional and you can use MicroPie without them. To install the optional dependencies use: ```bash pip install jinja2 multipart ``` By default MicroPie will use the `json` library from Python's standard library. If you need faster performance you can use `orjson`. MicroPie *will* use `orjson` *if installed* by default. 
If it is not installed, MicroPie will fall back to `json`. This means with or without `orjson` installed MicroPie will still handle JSON requests/responses the same. To install `orjson` and take advantage of its performance, use: ```bash pip install orjson ``` ### **Install an ASGI Web Server** In order to test and deploy your apps you will need an ASGI web server like Uvicorn, Hypercorn or Daphne. Install `uvicorn` with: ```bash pip install uvicorn ``` You can also install MicroPie with `uvicorn` included using: ```bash pip install micropie[all] ``` ## **Getting Started** ### **Create Your First ASGI App** Save the following as `app.py`: ```python from MicroPie import App class MyApp(App): async def index(self): return "Welcome to MicroPie ASGI." app = MyApp() ``` Run the server with: ```bash uvicorn app:app ``` Access your app at [http://127.0.0.1:8000](http://127.0.0.1:8000). ## **Core Features** ### **1. Flexible HTTP Routing for GET Requests** MicroPie automatically maps URLs to methods within your `App` class. Routes can be defined as either synchronous or asynchronous functions, offering good flexibility. For GET requests, pass data through query strings or URL path segments, automatically mapped to method arguments. ```python class MyApp(App): async def greet(self, name="Guest"): return f"Hello, {name}!" async def hello(self): name = self.request.query_params.get("name", [None])[0] return f"Hello {name}!" ``` **Access:** - [http://127.0.0.1:8000/greet?name=Alice](http://127.0.0.1:8000/greet?name=Alice) returns `Hello, Alice!`, the same as [http://127.0.0.1:8000/greet/Alice](http://127.0.0.1:8000/greet/Alice), which also returns `Hello, Alice!`. - [http://127.0.0.1:8000/hello/Alice](http://127.0.0.1:8000/hello/Alice) returns a `500 Internal Server Error` because it is expecting [http://127.0.0.1:8000/hello?name=Alice](http://127.0.0.1:8000/hello?name=Alice), which returns `Hello Alice!` ### **2.
Flexible HTTP POST Request Handling** MicroPie also supports handling form data submitted via HTTP POST requests. Form data is automatically mapped to method arguments. It is able to handle default values and raw/JSON POST data: ```python class MyApp(App): async def submit_default_values(self, username="Anonymous"): return f"Form submitted by: {username}" async def submit_catch_all(self): username = self.request.body_params.get("username", ["Anonymous"])[0] return f"Submitted by: {username}" ``` By default, MicroPie's route handlers can accept any request method; it's up to you how to handle any incoming requests! You can check the request method (and a number of other things specific to the current request state) in the handler with `self.request.method`. You can see how to handle POST JSON data at [examples/api](https://github.com/patx/micropie/tree/main/examples/api). You can use [middleware](https://github.com/patx/micropie#8-middleware) to add explicit routing when needed. See the [middleware router](https://github.com/patx/micropie/blob/main/examples/middleware/router.py) example. ### **3. Real-Time Communication with Socket.IO** Because of its designed simplicity, MicroPie does not handle WebSockets out of the box. While the underlying ASGI interface can theoretically handle WebSocket connections, MicroPie’s routing and request-handling logic is designed primarily for HTTP. While MicroPie does not natively support WebSockets (*yet!*), you can easily integrate dedicated WebSocket libraries like **Socket.IO** alongside Uvicorn to handle real-time, bidirectional communication. Check out [examples/socketio](https://github.com/patx/micropie/tree/main/examples/socketio) to see this in action. ### **4. Jinja2 Template Rendering** Dynamic HTML generation is supported via Jinja2. This happens asynchronously using Python's `asyncio` library, so make sure to use `async` and `await` with this method.
#### **`app.py`** ```python class MyApp(App): async def index(self): return await self._render_template("index.html", title="Welcome", message="Hello from MicroPie!") ``` #### **`templates/index.html`** ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>{{ title }}</title> </head> <body> <h1>{{ message }}</h1> </body> </html> ``` ### **5. Static File Serving** Here again, like WebSockets, MicroPie does not have a built-in static file method. While MicroPie does not natively support static files, if you need them, you can easily integrate dedicated libraries like **ServeStatic** or **Starlette’s StaticFiles** alongside Uvicorn to handle async static file serving. Check out [examples/static_content](https://github.com/patx/micropie/tree/main/examples/static_content) to see this in action. ### **6. Streaming Responses** Support for streaming responses makes it easy to send data in chunks. ```python class MyApp(App): async def stream(self): async def generator(): for i in range(1, 6): yield f"Chunk {i}\n" return generator() ``` ### **7. Sessions and Cookies** Built-in session handling simplifies state management: ```python class MyApp(App): async def index(self): if "visits" not in self.request.session: self.request.session["visits"] = 1 else: self.request.session["visits"] += 1 return f"You have visited {self.request.session['visits']} times." ``` You can also use the `SessionBackend` class to create your own session backend. You can see an example of this in [examples/sessions](https://github.com/patx/micropie/tree/main/examples/sessions). ### **8. Middleware** MicroPie allows you to create pluggable middleware to hook into the request lifecycle. Take a look at a trivial example using `HttpMiddleware` that prints console messages before and after the request is processed. Check out [examples/middleware](https://github.com/patx/micropie/tree/main/examples/middleware) to see more.
```python from MicroPie import App, HttpMiddleware class MiddlewareExample(HttpMiddleware): async def before_request(self, request): print("Hook before request") async def after_request(self, request, status_code, response_body, extra_headers): print("Hook after request") class Root(App): async def index(self): return "Hello, World!" app = Root() app.middlewares.append(MiddlewareExample()) ``` ### **9. Deployment** MicroPie apps can be deployed using any ASGI server. For example, using Uvicorn: if our application is saved as `app.py` and our `App` subclass is assigned to the `app` variable, we can run it with: ```bash uvicorn app:app --workers 4 --port 8000 ``` ## **Learn by Examples** The best way to get an idea of how MicroPie works is to see it in action! Check out the [examples folder](https://github.com/patx/micropie/tree/main/examples) for more advanced usage, including: - Template rendering - Custom HTTP request handling - File uploads - Serving static content - Session usage - JSON Requests and Responses - Socket.io Integration - Async Streaming - Middleware, including rate limiting and explicit routing - Form handling and POST requests - And more ## **Comparisons** ### **Features vs Other Popular Frameworks** | Feature | MicroPie | Flask | CherryPy | Bottle | Django | FastAPI | |---------------------|---------------|--------------|------------|--------------|--------------|-----------------| | **Routing** | Automatic | Manual | Automatic | Manual | Views | Manual | | **Template Engine** | Jinja2 (Opt.)
| Jinja2 | Plugin | SimpleTpl | Django | Jinja2 | | **Middleware** | Yes | Yes | Yes | Yes | Yes | Yes | | **Session Handling**| Plugin | Extension | Built-in | Plugin | Built-in | Extension | | **Async Support** | Yes | No (Quart) | No | No | Yes | Yes | | **Built-in Server** | No | No | Yes | Yes | Yes | No | ## Benchmark Results The table below summarizes the performance of various ASGI frameworks based on a 15-second `wrk` test with 4 threads and 64 connections, measuring a simple "hello world" JSON response. [Learn More](https://gist.github.com/patx/26ad4babd662105007a6e728f182e1db). | Framework | Total Requests | Req/Sec | Transfer/Sec (MB/s) | Avg Latency (ms) | Stdev Latency (ms) | Max Latency (ms) | |-------------|----------------|-----------|---------------------|------------------|--------------------|------------------| | Blacksheep | 831,432 | 55,060.05 | 7.98 | 1.15 | 0.39 | 15.11 | | MicroPie | 791,721 | 52,685.82 | 8.09 | 1.35 | 1.09 | 21.59 | | Starlette | 779,092 | 51,930.45 | 7.03 | 1.22 | 0.39 | 17.42 | | Litestar | 610,059 | 40,401.18 | 5.47 | 1.57 | 0.63 | 33.66 | | FastAPI | 281,493 | 18,756.73 | 2.54 | 3.52 | 1.82 | 56.73 | ## **Suggestions or Feedback?** We welcome suggestions, bug reports, and pull requests! - File issues or feature requests [here](https://github.com/patx/micropie/issues). - Security issues that should not be public, email `harrisonerd [at] gmail.com`. # **API Documentation** ## Session Backend Abstraction MicroPie provides an abstraction for session backends, allowing you to define custom session storage mechanisms. ### `SessionBackend` Class #### Methods - `load(session_id: str) -> Dict[str, Any]` - Abstract method to load session data given a session ID. - `save(session_id: str, data: Dict[str, Any], timeout: int) -> None` - Abstract method to save session data. ### `InMemorySessionBackend` Class An in-memory implementation of the `SessionBackend`. #### Methods - `__init__()` - Initializes the in-memory session backend. 
- `load(session_id: str) -> Dict[str, Any]` - Loads session data for the given session ID. - `save(session_id: str, data: Dict[str, Any], timeout: int) -> None` - Saves session data for the given session ID. ## Middleware Abstraction MicroPie allows you to create pluggable middleware to hook into the request lifecycle. ### `HttpMiddleware` Class #### Methods - `before_request(request: Request) -> None` - Abstract method called before the request is processed. - `after_request(request: Request, status_code: int, response_body: Any, extra_headers: List[Tuple[str, str]]) -> None` - Abstract method called after the request is processed but before the final response is sent to the client. ## Request Object ### `Request` Class Represents an HTTP request in the MicroPie framework. #### Attributes - `scope`: The ASGI scope dictionary for the request. - `method`: The HTTP method of the request. - `path_params`: List of path parameters. - `query_params`: Dictionary of query parameters. - `body_params`: Dictionary of body parameters. - `get_json`: JSON request body object. - `session`: Dictionary of session data. - `files`: Dictionary of multipart data/streamed content. - `headers`: Dictionary of headers. ## Application Base ### `App` Class The main ASGI application class for handling HTTP requests in MicroPie. #### Methods - `__init__(session_backend: Optional[SessionBackend] = None) -> None` - Initializes the application with an optional session backend. - `request -> Request` - Retrieves the current request from the context variable. - `__call__(scope: Dict[str, Any], receive: Callable[[], Awaitable[Dict[str, Any]]], send: Callable[[Dict[str, Any]], Awaitable[None]]) -> None` - ASGI callable interface for the server. Checks `scope` type. - `_asgi_app_http(scope: Dict[str, Any], receive: Callable[[], Awaitable[Dict[str, Any]]], send: Callable[[Dict[str, Any]], Awaitable[None]]) -> None` - ASGI application entry point for handling HTTP requests. 
- `request(self) -> Request` - Accessor for the current request object. - Returns the current request from the context variable. - `_parse_cookies(cookie_header: str) -> Dict[str, str]` - Parses the Cookie header and returns a dictionary of cookie names and values. - `_parse_multipart(reader: asyncio.StreamReader, boundary: bytes) -> Tuple[Dict[str, List[str]], Dict[str, Dict[str, Any]]]` - Asynchronously parses multipart/form-data from the given reader using the specified boundary. Returns a tuple of two dictionaries: `form_data` (text fields as key-value pairs) and `files` (file fields with metadata). Each file entry in `files` contains: - `filename`: The original filename of the uploaded file. - `content_type`: The MIME type of the file (defaults to `application/octet-stream`). - `content`: An `asyncio.Queue` containing chunks of file data as bytes, with a `None` sentinel signaling the end of the stream. - Handlers can consume the file data by iterating over the queue (e.g., using `await queue.get()`). - *Requires:* `multipart` - `_send_response(send: Callable[[Dict[str, Any]], Awaitable[None]], status_code: int, body: Any, extra_headers: Optional[List[Tuple[str, str]]] = None) -> None` - Sends an HTTP response using the ASGI send callable. - `_redirect(location: str) -> Tuple[int, str]` - Generates an HTTP redirect response. - `_render_template(name: str, **kwargs: Any) -> str` - Renders a template asynchronously using Jinja2. - *Requires*: `jinja2` The `App` class is the main entry point for creating MicroPie applications. It implements the ASGI interface and handles HTTP requests. ## Response Formats Handlers can return responses in the following formats: 1. String or bytes or JSON 2. Tuple of (status_code, body) 3. Tuple of (status_code, body, headers) 4. 
Async or sync generator for streaming responses ## Error Handling MicroPie provides built-in error handling for common HTTP status codes: - `404 Not Found`: Automatically returned for non-existent routes - `400 Bad Request`: Returned for missing required parameters - `500 Internal Server Error`: Returned for unhandled exceptions Custom error handling can be implemented through middleware. ---- © 2025 Harrison Erd
text/markdown
null
Harrison Erd <harrisonerd@gmail.com>
null
null
null
micropie, asgi, microframework, http
[ "Framework :: AsyncIO", "Environment :: Web Environment", "Topic :: Internet :: WWW/HTTP", "Topic :: Software Development :: Libraries :: Application Frameworks" ]
[]
null
null
null
[]
[ "MicroPie" ]
[]
[ "jinja2; extra == \"all\"", "multipart; extra == \"all\"", "orjson; extra == \"all\"", "uvicorn; extra == \"all\"", "jinja2; extra == \"standard\"", "multipart; extra == \"standard\"" ]
[]
[]
[]
[ "Homepage, https://patx.github.io/micropie", "Repository, https://github.com/patx/micropie" ]
python-requests/2.32.3
2025-06-10T09:04:59.074819
micropie-0.11-py2.py3-none-any.whl
14,284
a1/96/bf19f1d1efbd0953f7239b788c5d673555db0716676ae487fa84f2d4d658/micropie-0.11-py2.py3-none-any.whl
py2.py3
bdist_wheel
null
false
c445ded2fbce0d321cfa805b55742e7c
4ce97c009c63429603b749ef62c810e623b3ca03805a8fe703b462950ed44616
a196bf19f1d1efbd0953f7239b788c5d673555db0716676ae487fa84f2d4d658
null
[]
2.4
MontagePy
2.3.0
Montage toolkit for reprojecting, mosaicking, and displaying astronomical images.
Montage: Astronomical Image Mosaics, Examination, and Visualization =================================================================== Montage 7.0 adds two major capabilities. The first is a set of tools for building HiPS maps. HiPS (Hierarchical Progressive Surveys) is a hierarchical tiling mechanism which allows one to seamlessly access, visualize and browse image data, in particular large-scale, high-resolution surveys. HiPS construction through Montage consists of building large-scale mosaics using pre-existing Montage modules (reprojection, background matching, and coaddition) with a HiPS-specific projection (HPX). The image hierarchy just requires repetitive shrinking of higher-resolution images by factors of two. Finally, the tiles to be served are simply 512x512 cutouts from these mosaics. For high-resolution data, this process can benefit greatly from massive parallelization and this can be achieved in a number of ways. In particular, tools have been developed to streamline this on cloud platforms like AWS. At arcminute scale, all-sky processing can adequately be done on a single desktop machine, and at arcsecond scale the same tools can create a set of jobs that can be submitted to run on a cloud in a few days (or on a few tens of processors if one has them in-house). The second addition to Montage is a complete set of modern procedures for building Python binary extension Montage wheels for Linux and Mac systems (and extensible to some others). This is very much a moving target and we are building as many of these as we can and pushing them to PyPI, but if someone wants to extend Montage for their own use, they can use the same infrastructure to build custom wheels as well.
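The shrink-by-two hierarchy described above can be illustrated with a small sketch (illustrative only, not part of Montage; assumes a square mosaic and the 512x512 tile size mentioned above):

```python
import math

# Illustrative sketch, not Montage code: given a square full-resolution
# mosaic of `mosaic_size` pixels on a side, count how many shrink-by-two
# steps are needed before the whole image fits in a single 512x512 tile.
TILE = 512

def hips_levels(mosaic_size):
    levels = 0
    size = mosaic_size
    while size > TILE:
        size = math.ceil(size / 2)  # shrink by a factor of two
        levels += 1
    return levels

print(hips_levels(4096))  # → 3 (4096 -> 2048 -> 1024 -> 512)
```

This is why massive parallelization pays off: each extra level quadruples the tile count at full resolution while the shrink steps themselves stay cheap.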
-------------- Montage (http://montage.ipac.caltech.edu) is an Open Source toolkit, distributed with a BSD 3-clause license, for assembling Flexible Image Transport System (FITS) images into mosaics according to the user's custom specifications of coordinates, projection, spatial sampling, rotation and background matching. The toolkit contains utilities for reprojecting and background matching images, assembling them into mosaics, visualizing the results, and discovering, analyzing and understanding image metadata from archives or the user's images. Montage is written in ANSI-C and is portable across all common Unix-like platforms, including Linux, Solaris, Mac OSX and Cygwin on Windows. The package provides both stand-alone executables and the same functionality in library form. It has been cross-compiled to provide native Windows executables and packaged as a binary Python extension (available via "pip install MontagePy"). The distribution contains all libraries needed to build the toolkit from a single simple "make" command, including CFITSIO and the WCS library (which has been extended to support HEALPix and World-Wide Telescope TOAST projections). The toolkit is in wide use in astronomy to support research projects, and to support pipeline development, product generation and image visualization for major projects and missions; e.g. Spitzer Space Telescope, Herschel, Kepler, AKARI and others. Montage is used as an exemplar application by the computer science community in developing next-generation cyberinfrastructure, especially workflow frameworks on distributed platforms, including multiple clouds. Montage provides multiple reprojection algorithms optimized for different needs, maximizing alternately flux conservation, range of projections, and speed. The visualization module supports full (three-color) display of FITS images and publication quality overlays of catalogs (scaled symbols), image metadata, and coordinate grids.
It fits equally well in pipelines or as the basis for interactive image exploration, and there is Python support for the latter (it has also been used in web/JavaScript applications). We are in the process of adding automated regression testing using Jenkins. At the moment, this only includes a couple of dummy tests on a Jenkins server that we maintain specifically for the Montage project. Montage was funded from 2002 to 2005 by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. The Montage distribution includes an adaptation of the MOPEX algorithm developed at the Spitzer Science Center. Montage has also been funded by the National Science Foundation under Award Number NSF ACI-1440620.
text/markdown
null
John Good <jcg@ipac.caltech.edu>
null
null
Copyright (c) 2017 California Institute of Technology, Pasadena, California. Based on Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions of this BSD 3-clause license are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
astronomy, astronomical, image, reprojection, mosaic, visualization
[]
[]
null
null
null
[]
[]
[]
[ "requests" ]
[]
[]
[]
[ "Homepage, https://github.com/Caltech-IPAC/Montage" ]
twine/6.1.0 CPython/3.12.7
2025-06-05T02:34:26.306335
montagepy-2.3.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
6,429,356
68/0d/4bf0c97151c3c5dde45627973c67ce55573714e7fcb0e0a64f1f8fcbf4ee/montagepy-2.3.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
cp313
bdist_wheel
null
false
524caf5ae8c2bb547bce9dc1030d0342
a39d5742cfc0f7fdd468f409135c223d00d1ab62431e02f568601bd28c2c6469
680d4bf0c97151c3c5dde45627973c67ce55573714e7fcb0e0a64f1f8fcbf4ee
null
[ "LICENSE" ]
2.4
MontagePy
2.3.0
Montage toolkit for reprojecting, mosaicking, and displaying astronomical images.
Montage: Astronomical Image Mosaics, Examination, and Visualization =================================================================== Montage 7.0 adds two major capabilities. The first is a set of tools for building HiPS maps. HiPS (Hierarchical Progressive Surveys) is a hierarchical tiling mechanism which allows one to seamlessly access, visualize and browse image data, in particular large-scale, high-resolution surveys. HiPS construction through Montage consists of building large-scale mosaics using pre-existing Montage modules (reprojection, background matching, and coaddition) with a HiPS-specific projection (HPX). The image hierarchy then just requires repetitive shrinking of higher-resolution images by factors of two. Finally, the tiles to be served are simply 512x512 cutouts from these mosaics. For high-resolution data, this process can benefit greatly from massive parallelization, which can be achieved in a number of ways. In particular, tools have been developed to streamline this on cloud platforms like AWS. At arcminute scale, all-sky processing can adequately be done on a single desktop machine; at arcsecond scale, the same tools can create a set of jobs that can be submitted to run on a cloud in a few days (or on a few tens of processors if one has that in-house). The second addition to Montage is a complete set of modern procedures for building Python binary extension Montage wheels for Linux and Mac systems (and extensible to some others). This is very much a moving target and we are building as many of these as we can and pushing them to PyPI, but if someone wants to extend Montage for their own use they can use the same infrastructure to build custom wheels as well.
-------------- Montage (http://montage.ipac.caltech.edu) is an Open Source toolkit, distributed with a BSD 3-clause license, for assembling Flexible Image Transport System (FITS) images into mosaics according to the user's custom specifications of coordinates, projection, spatial sampling, rotation and background matching. The toolkit contains utilities for reprojecting and background matching images, assembling them into mosaics, visualizing the results, and discovering, analyzing and understanding image metadata from archives or the user's images. Montage is written in ANSI-C and is portable across all common Unix-like platforms, including Linux, Solaris, Mac OS X and Cygwin on Windows. The package provides both stand-alone executables and the same functionality in library form. It has been cross-compiled to provide native Windows executables and packaged as a binary Python extension (available via "pip install MontagePy"). The distribution contains all libraries needed to build the toolkit from a single simple "make" command, including CFITSIO and the WCS library (which has been extended to support HEALPix and World-Wide Telescope TOAST projections). The toolkit is in wide use in astronomy to support research projects, and to support pipeline development, product generation and image visualization for major projects and missions, e.g. the Spitzer Space Telescope, Herschel, Kepler, AKARI and others. Montage is used as an exemplar application by the computer science community in developing next-generation cyberinfrastructure, especially workflow frameworks on distributed platforms, including multiple clouds. Montage provides multiple reprojection algorithms optimized for different needs, maximizing alternately flux conservation, range of projections, and speed. The visualization module supports full (three-color) display of FITS images and publication-quality overlays of catalogs (scaled symbols), image metadata, and coordinate grids.
It fits equally well in pipelines or as the basis for interactive image exploration, and there is Python support for the latter (it has also been used in web/JavaScript applications). We are in the process of adding automated regression testing using Jenkins; at the moment, this only includes a couple of dummy tests on a Jenkins server that we maintain specifically for the Montage project. Montage was funded from 2002 to 2005 by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. The Montage distribution includes an adaptation of the MOPEX algorithm developed at the Spitzer Science Center. Montage has also been funded by the National Science Foundation under Award Number NSF ACI-1440620.
text/markdown
null
John Good <jcg@ipac.caltech.edu>
null
null
Copyright (c) 2017 California Institute of Technology, Pasadena, California. Based on Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions of this BSD 3-clause license are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
astronomy, astronomical, image, reprojection, mosaic, visualization
[]
[]
null
null
null
[]
[]
[]
[ "requests" ]
[]
[]
[]
[ "Homepage, https://github.com/Caltech-IPAC/Montage" ]
twine/6.1.0 CPython/3.12.7
2025-06-05T01:16:09.346846
montagepy-2.3.0-cp312-cp312-musllinux_1_2_x86_64.whl
670,840
88/87/ec80ced75683a6eed41cc32c1319c8bd49ee9794e7fe4df9eae425bea9e8/montagepy-2.3.0-cp312-cp312-musllinux_1_2_x86_64.whl
cp312
bdist_wheel
null
false
3f0e7693eaafc560dd1f4a03f49fe7c7
df503e84e0c32aa23184b5114dfed4541b2feebcf9a3a4a394abf72bf06423ba
8887ec80ced75683a6eed41cc32c1319c8bd49ee9794e7fe4df9eae425bea9e8
null
[ "LICENSE" ]
2.1
Mzhtools
5.1.1
Python helper tools
Python for mytools
text/markdown
Author's name
191891173@qq.com
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
https://pypi.org/project/Mzhtools/
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.11.9
2025-06-05T01:07:17.378754
Mzhtools-5.1.1-py3-none-any.whl
9,254
e0/f8/c713c98509790d2334e5ba9f82e83b0cbcc7717dbd358fe5b8cdcf913024/Mzhtools-5.1.1-py3-none-any.whl
py3
bdist_wheel
null
false
4097677916d4ae5625d278edb8e55a78
51bfc34cf198ebdf9f7e27e64e0f30d607532aa17cf6842e2bd2f2e4b1735dbf
e0f8c713c98509790d2334e5ba9f82e83b0cbcc7717dbd358fe5b8cdcf913024
null
[]
2.4
OpenMM
8.3.0rc2
Python wrapper for OpenMM (a C++ MD package)
OpenMM is a toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations, or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it truly unique among simulation codes.
null
Peter Eastman
null
null
null
Python Software Foundation License (BSD-like)
null
[]
[ "Linux" ]
https://openmm.org
https://openmm.org
null
[]
[]
[]
[ "numpy" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.3
2025-06-07T21:23:06.911444
openmm-8.3.0rc2-cp312-cp312-macosx_10_12_x86_64.whl
12,913,286
a6/44/8b9c88b572c8ec904d5f4b43000901fecd091bdd8be294775f40d435a547/openmm-8.3.0rc2-cp312-cp312-macosx_10_12_x86_64.whl
cp312
bdist_wheel
null
false
7640a41963cf560d8ac4237137bf1283
39db549b588327c31cf441b938a2671de17b8f6b4f937e35cdece1132dcf90d8
a6448b9c88b572c8ec904d5f4b43000901fecd091bdd8be294775f40d435a547
null
[]
2.4
OpenMM
8.3.0rc2
Python wrapper for OpenMM (a C++ MD package)
OpenMM is a toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations, or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it truly unique among simulation codes.
null
Peter Eastman
null
null
null
Python Software Foundation License (BSD-like)
null
[]
[ "Linux" ]
https://openmm.org
https://openmm.org
null
[]
[]
[]
[ "numpy", "OpenMM-CUDA-12; extra == \"cuda12\"", "OpenMM-HIP-6; extra == \"hip6\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.12.3
2025-06-07T21:23:02.333375
openmm-8.3.0rc2-cp311-cp311-win_amd64.whl
12,815,210
50/f8/3ae4e9d7d9d92b053a4d4c3e532e422e6089013c0a768c84d9830b3b8b79/openmm-8.3.0rc2-cp311-cp311-win_amd64.whl
cp311
bdist_wheel
null
false
60f73936e28fc87d161224675afb4be6
7a69178b9f9b0d5b643039c8a39c65b1bfaea3b193a2ba5115584bbe6f946287
50f83ae4e9d7d9d92b053a4d4c3e532e422e6089013c0a768c84d9830b3b8b79
null
[]
2.4
PPIFold
0.5.3
Automatic pipeline using AlphaPulldown to predict PPIs and homo-oligomers
null
null
Quentin Rouger
quentin.rouger@univ-rennes.fr
null
null
GPL-3.0 license
null
[]
[]
https://github.com/Qrouger/PPIFold
null
null
[]
[]
[]
[ "alphapulldown", "seaborn", "matplotlib", "scipy" ]
[]
[]
[]
[]
twine/6.0.1 CPython/3.12.0
2025-06-19T09:44:14.628902
ppifold-0.5.3-py3-none-any.whl
33,612
bd/6d/c43e695f6dbbe0f09fd0a55ce52871179749d2bc6217dc0f96df3cfa304c/ppifold-0.5.3-py3-none-any.whl
py3
bdist_wheel
null
false
8b81b50676110c67c21105df985a16f7
1bb83d7bf95668b74d176326c41a8543bd8ea9688a5f2026899a0056cc7811ac
bd6dc43e695f6dbbe0f09fd0a55ce52871179749d2bc6217dc0f96df3cfa304c
null
[]
2.4
Products.PDBDebugMode
2.1
Post-mortem debugging on Zope exceptions
=============================================== Products.PDBDebugMode =============================================== Enable various PDB debugging when debug-mode=on ----------------------------------------------- When Zope is running in debug mode, this product hooks PDB debugging into various parts of a Zope instance. Some additional Plone-specific hooks are also included. Requirements ------------ This version of PDBDebugMode has been tested with Zope 4 and Plone 5.2 in Python 2.7, 3.6 and 3.7. For Zope 2 (until Plone 5.1) please use `Products.PDBDebugMode = 1.3`. If ipdb (http://pypi.python.org/pypi/ipdb) is available, it will be used instead of the standard pdb. It's recommended that you use an editor or IDE that can cooperate with pdb. Emacs, for example, will display the corresponding lines of the source file alongside the pdb prompt. Remember that this product does nothing unless Zope is being run with debug-mode=on, such as with "./bin/instance fg". Post-Mortem Debugging --------------------- To provide for better investigation of errors, any error or exception logged with the Python logging module will invoke pdb.post_mortem() if a traceback can be retrieved; set_trace will be invoked otherwise. Since the Zope error_log exception handler uses the logging module when logging errors, this provides for post-mortem debugging of Zope errors. It is often useful, for example, to remove NotFound or Unauthorized from the ignored exceptions in error_log and then investigate such errors with PDB. Runcall Requests ---------------- Any request that has the key 'pdb_runcall' will call the result of the request traversal in the debugger, thus allowing for stepping through the resulting execution. To debug a POST or any other request which might be tricky to insert the 'pdb_runcall' key into, use '?toggle_runcall=1' at the end of a URL immediately preceding the POST to set a 'pdb_runcall' cookie, which will then invoke pdb.runcall when the POST is submitted.
Use '?toggle_runcall=1' at the end of a URL to clear the cookie. Remember that the cookie applies at the level in the hierarchy at which it was set. Debug View ---------- Additionally, a view named 'pdb' is registered for all objects that will simply raise an exception, leaving you with the current context to inspect. Use it, for example, by calling http://localhost:8080/Plone/foo/@@pdb. Allow Import of pdb ------------------- Import of the pdb module is also allowed in unprotected code such as Python scripts. Changelog ========= 2.1 (2025-06-19) ---------------- Bug fixes: - Include dependencies in zcml to fix use in a pip-based install. [pbauer] 2.0 (2019-04-01) ---------------- New features: - Add log-message on startup. [pbauer] Bug fixes: - Remove post_mortem in tests since that feature is now a part of zope.testrunner and is unneeded here. Fixes https://github.com/plone/Products.CMFPlone/issues/2803 [pbauer] - Remove traces of support for Zope 2. [pbauer] 1.4 (2019-03-02) ---------------- Breaking changes: * Make compatible with Zope 4 and drop support for Zope 2. [pbauer] New features: * Add compatibility for Python 3 and 2. [frapell] * Improve debug mode detection, provide a ZCML feature, and enable when running tests with '-D'. [rpatterson] * Add zope.testrunner support. [rpatterson] * Add some missing iPython support for runcall and broken_delete. [rpatterson] Bug fixes: * Apparently the ipdb support only works with ipdb 0.3 or greater. Added an "ipdb" extra for this requirement. [rossp] * Fix ipdb import in zcatalog.py. [pabo] 1.3 - 2011-01-14 ---------------- * Ignore invalid GenericSetup handlers. [rossp] * Use ipdb when available. [neaj] 1.2 - 2011-01-07 ---------------- * Add some zopectl scripts I use when evaluating upgrades. [rossp] * Better handling of exceptions while checking error matching. [rossp] * Fix a problem with doing post_mortem debugging of error_log ignored exceptions. 
[rossp] * Fix handling of socket errors * Fix handling of SiteErrorLog tracebacks * Fix handling of exc_info logging arg 1.1 - 2009-04-18 ---------------- * Fix a bug due to a change in monkeypatcher 1.0 - 2009-04-10 ---------------- * Add collective.monkeypatcher as a requirement [kdeldycke] 2009-04-09 * Fix some recursion errors 0.3 - 2009-04-08 ---------------- * Use collective.monkeypatcher to move all patches into ZCML * Fully deprecate the Zope exception handler in favor of the logging hook, since the Zope exception handler uses the logging module anyway and more can be done by hooking at that level. * Handle failed matches in Products.PDBDebugMode.pdblogging more gracefully * More flexible log matching. Change Products.PDBDebugMode.pdblogging.ignore_regexes to ignore_matchers and accept any callable. 0.2 - 2008-05-15 ---------------- * Eggified 0.1 - 2006-03-11 ---------------- * Initial release
null
Ross Patterson
me@rpatterson.net
null
null
GPL
null
[ "Environment :: Web Environment", "Topic :: Software Development :: Libraries :: Python Modules", "Framework :: Plone", "Framework :: Plone :: 5.2", "Framework :: Plone :: 6.0", "Framework :: Zope :: 4", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language ::...
[]
https://github.com/collective/Products.PDBDebugMode
null
null
[]
[]
[]
[ "setuptools", "collective.monkeypatcher", "six", "ipdb>=0.3; extra == \"ipdb\"", "zope.testrunner; extra == \"zodb\"", "zope.testing; extra == \"zodb-testing\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.3
2025-06-18T22:42:32.258019
products_pdbdebugmode-2.1.tar.gz
18,652
af/f7/37090d4565c4e6b85b6b013cc8773ccd198a361e65bd6f62c780b8b334e5/products_pdbdebugmode-2.1.tar.gz
source
sdist
null
false
17346fe9ebe7addcc380ec894a8bd042
b833f190f3bed66664c5f765f9c551e8b3d942687b6a92f03873133096227b3b
aff737090d4565c4e6b85b6b013cc8773ccd198a361e65bd6f62c780b8b334e5
null
[]