[{"name": "zopfli", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPYZOPFLI\nUSAGE\nTODO\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\nPYZOPFLI\ncPython bindings for\nzopfli.\nIt requires Python 3.7 or greater.\n\nUSAGE\npyzopfli is a straight forward wrapper around zopfli's ZlibCompress method.\nfrom zopfli.zlib import compress\nfrom zlib import decompress\ns = 'Hello World'\nprint decompress(compress(s))\n\npyzopfli also wraps GzipCompress, but the API point does not try to\nmimic the gzip module.\nfrom zopfli.gzip import compress\nfrom StringIO import StringIO\nfrom gzip import GzipFile\nprint GzipFile(fileobj=StringIO(compress(\"Hello World!\"))).read()\n\nBoth zopfli.zlib.compress and zopfli.gzip.compress support the following\nkeyword arguments. All values should be integers; boolean parmaters are\ntreated as expected, 0 and >0 as false and true.\n\nverbose dumps zopfli debugging data to stderr\nnumiterations Maximum amount of times to rerun forward and backward\npass to optimize LZ77 compression cost. Good values: 10, 15 for small\nfiles, 5 for files over several MB in size or it will be too slow.\nblocksplitting If true, splits the data in multiple deflate blocks\nwith optimal choice for the block boundaries. Block splitting gives\nbetter compression. Default: true (1).\nblocksplittinglast If true, chooses the optimal block split points\nonly after doing the iterative LZ77 compression. If false, chooses\nthe block split points first, then does iterative LZ77 on each\nindividual block. Depending on the file, either first or last gives\nthe best compression. Default: false (0).\nblocksplittingmax Maximum amount of blocks to split into (0 for\nunlimited, but this can give extreme results that hurt compression on\nsome files). Default value: 15.\n\n\nTODO\n\nStop reading the entire file into memory and support streaming\nMonkey patch zlib and gzip so code with an overly tight binding can\nbe easily modified to use zopfli.\n\n\n\n", "description": "Zopfli compression algorithm for higher deflate or zlib compression.", "category": "Compression"}, {"name": "zipp", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCompatibility\nUsage\nFor Enterprise\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA pathlib-compatible Zipfile object wrapper. Official backport of the standard library\nPath object.\n\nCompatibility\nNew features are introduced in this third-party library and later merged\ninto CPython. 
The following table indicates which versions of this library\nwere contributed to different versions in the standard library:\n\n\nzipp\nstdlib\n\n\n\n3.15\n3.12\n\n3.5\n3.11\n\n3.2\n3.10\n\n3.3 ??\n3.9\n\n1.0\n3.8\n\n\n\n\nUsage\nUse zipp.Path in place of zipfile.Path on any Python.\n\nFor Enterprise\nAvailable as part of the Tidelift Subscription.\nThis project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.\nLearn more.\n\n\n", "description": "Backport of pathlib-compatible object wrapper for zip files."}, {"name": "yarl", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nyarl\nIntroduction\nInstallation\nDependencies\nAPI documentation\nWhy isn't boolean supported by the URL query API?\nComparison with other URL libraries\nSource code\nDiscussion list\nAuthors and License\n\n\n\n\n\nREADME.rst\n\n\n\n\nyarl\nThe module provides handy URL class for URL parsing and changing.\n\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nUrl is constructed from str:\n>>> from yarl import URL\n>>> url = URL('https://www.python.org/~guido?arg=1#frag')\n>>> url\nURL('https://www.python.org/~guido?arg=1#frag')\nAll url parts: scheme, user, password, host, port, path,\nquery and fragment are accessible by properties:\n>>> url.scheme\n'https'\n>>> url.host\n'www.python.org'\n>>> url.path\n'/~guido'\n>>> url.query_string\n'arg=1'\n>>> url.query\n<MultiDictProxy('arg': '1')>\n>>> url.fragment\n'frag'\nAll url manipulations produce a new url object:\n>>> url = URL('https://www.python.org')\n>>> url / 'foo' / 'bar'\nURL('https://www.python.org/foo/bar')\n>>> url / 'foo' % {'bar': 'baz'}\nURL('https://www.python.org/foo?bar=baz')\nStrings passed to constructor and modification methods are\nautomatically encoded giving canonical representation as result:\n>>> url = URL('https://www.python.org/\u043f\u0443\u0442\u044c')\n>>> url\nURL('https://www.python.org/%D0%BF%D1%83%D1%82%D1%8C')\nRegular properties are percent-decoded, use raw_ versions for\ngetting encoded strings:\n>>> url.path\n'/\u043f\u0443\u0442\u044c'\n\n>>> url.raw_path\n'/%D0%BF%D1%83%D1%82%D1%8C'\nHuman readable representation of URL is available as .human_repr():\n>>> url.human_repr()\n'https://www.python.org/\u043f\u0443\u0442\u044c'\nFor full documentation please read https://yarl.readthedocs.org.\n\nInstallation\n$ pip install yarl\n\nThe library is Python 3 only!\nPyPI contains binary wheels for Linux, Windows and MacOS.  If you want to install\nyarl on another operating system (like Alpine Linux, which is not\nmanylinux-compliant because of the missing glibc and therefore, cannot be\nused with our wheels) the tarball will be used to compile the library from\nthe source code. It requires a C compiler and Python headers installed.\nTo skip the compilation you must explicitly opt-in by setting the YARL_NO_EXTENSIONS\nenvironment variable to a non-empty value, e.g.:\n$ YARL_NO_EXTENSIONS=1 pip install yarl\nPlease note that the pure-Python (uncompiled) version is much slower. 
However,\nPyPy always uses a pure-Python implementation, and, as such, it is unaffected\nby this variable.\n\nDependencies\nYARL requires multidict library.\n\nAPI documentation\nThe documentation is located at https://yarl.readthedocs.org\n\nWhy isn't boolean supported by the URL query API?\nThere is no standard for boolean representation of boolean values.\nSome systems prefer true/false, others like yes/no, on/off,\nY/N, 1/0, etc.\nyarl cannot make an unambiguous decision on how to serialize bool values because\nit is specific to how the end-user's application is built and would be different for\ndifferent apps.  The library doesn't accept booleans in the API; a user should convert\nbools into strings using own preferred translation protocol.\n\nComparison with other URL libraries\n\nfurl (https://pypi.python.org/pypi/furl)\nThe library has rich functionality but the furl object is mutable.\nI'm afraid to pass this object into foreign code: who knows if the\ncode will modify my url in a terrible way while I just want to send URL\nwith handy helpers for accessing URL properties.\nfurl has other non-obvious tricky things but the main objection\nis mutability.\n\nURLObject (https://pypi.python.org/pypi/URLObject)\nURLObject is immutable, that's pretty good.\nEvery URL change generates a new URL object.\nBut the library doesn't do any decode/encode transformations leaving the\nend user to cope with these gory details.\n\n\n\nSource code\nThe project is hosted on GitHub\nPlease file an issue on the bug tracker if you have found a bug\nor have some suggestion in order to improve the library.\nThe library uses Azure Pipelines for\nContinuous Integration.\n\nDiscussion list\naio-libs google group: https://groups.google.com/forum/#!forum/aio-libs\nFeel free to post your questions and ideas here.\n\nAuthors and License\nThe yarl package is written by Andrew Svetlov.\nIt's Apache 2 licensed and freely available.\n\n\n", "description": "URL parsing and manipulation."}, {"name": "xml-python", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "XlsxWriter", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nXlsxWriter\nXlsxWriter is a Python module for writing files in the Excel 2007+ XLSX\nfile format.\nXlsxWriter can be used to write text, numbers, formulas and hyperlinks to\nmultiple worksheets and it supports features such as formatting and many more,\nincluding:\n\n100% compatible Excel XLSX files.\nFull formatting.\nMerged cells.\nDefined names.\nCharts.\nAutofilters.\nData validation and drop down lists.\nConditional formatting.\nWorksheet PNG/JPEG/GIF/BMP/WMF/EMF images.\nRich multi-format strings.\nCell comments.\nIntegration with Pandas and Polars.\nTextboxes.\nSupport for adding Macros.\nMemory optimization mode for writing large files.\n\nIt supports Python 3.4+ and PyPy3 and uses standard libraries only.\nHere is a simple example:\nimport xlsxwriter\n\n\n# Create an new Excel file and add a worksheet.\nworkbook = xlsxwriter.Workbook('demo.xlsx')\nworksheet = workbook.add_worksheet()\n\n# Widen the first column to make the text clearer.\nworksheet.set_column('A:A', 20)\n\n# Add a bold format to use to highlight cells.\nbold = workbook.add_format({'bold': True})\n\n# Write some simple text.\nworksheet.write('A1', 'Hello')\n\n# Text with formatting.\nworksheet.write('A2', 'World', bold)\n\n# Write some numbers, with row/column notation.\nworksheet.write(2, 0, 123)\nworksheet.write(3, 0, 123.456)\n\n# Insert an image.\nworksheet.insert_image('B5', 'logo.png')\n\nworkbook.close()\n\nSee the full documentation at: https://xlsxwriter.readthedocs.io\nRelease notes: https://xlsxwriter.readthedocs.io/changes.html\n\n\n", "description": "Create Excel XLSX files."}, {"name": "xlrd", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Reading data and formatting information from older Excel files.", "category": "Excel"}, {"name": "xgboost", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n  eXtreme Gradient Boosting\nLicense\nContribute to XGBoost\nReference\nSponsors\nOpen Source Collective sponsors\nSponsors\nBackers\n\n\n\n\n\nREADME.md\n\n\n\n\n  eXtreme Gradient Boosting\n\n\n\n\n\n\n\n\n\n\nCommunity |\nDocumentation |\nResources |\nContributors |\nRelease Notes\nXGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable.\nIt implements machine learning algorithms under the Gradient Boosting framework.\nXGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solve many data science problems in a fast and accurate way.\nThe same code runs on major distributed environment (Kubernetes, Hadoop, SGE, Dask, Spark, PySpark) and can solve problems beyond billions of examples.\nLicense\n\u00a9 Contributors, 2021. Licensed under an Apache-2 license.\nContribute to XGBoost\nXGBoost has been developed and used by a group of active community members. Your help is very valuable to make the package better for everyone.\nCheckout the Community Page.\nReference\n\nTianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. In 22nd SIGKDD Conference on Knowledge Discovery and Data Mining, 2016\nXGBoost originates from research project at University of Washington.\n\nSponsors\nBecome a sponsor and get a logo here. See details at Sponsoring the XGBoost Project. The funds are used to defray the cost of continuous integration and testing infrastructure (https://xgboost-ci.net).\nOpen Source Collective sponsors\n \nSponsors\n[Become a sponsor]\n\n\nBackers\n[Become a backer]\n\n\n\n", "description": "Gradient boosting library."}, {"name": "xarray", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nxarray: N-D labeled arrays and datasets\nWhy xarray?\nDocumentation\nContributing\nGet in touch\nNumFOCUS\nHistory\nContributors\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nxarray: N-D labeled arrays and datasets\n\n\n\n\n\n\n\n\n\n\nxarray (pronounced \"ex-array\", formerly known as xray) is an open source project and Python\npackage that makes working with labelled multi-dimensional arrays\nsimple, efficient, and fun!\nXarray introduces labels in the form of dimensions, coordinates and\nattributes on top of raw NumPy-like arrays,\nwhich allows for a more intuitive, more concise, and less error-prone\ndeveloper experience. The package includes a large and growing library\nof domain-agnostic functions for advanced analytics and visualization\nwith these data structures.\nXarray was inspired by and borrows heavily from\npandas, the popular data analysis package\nfocused on labelled tabular data. It is particularly tailored to working\nwith netCDF files, which\nwere the source of xarray's data model, and integrates tightly with\ndask for parallel computing.\nWhy xarray?\nMulti-dimensional (a.k.a. N-dimensional, ND) arrays (sometimes called\n\"tensors\") are an essential part of computational science. They are\nencountered in a wide range of fields, including physics, astronomy,\ngeoscience, bioinformatics, engineering, finance, and deep learning. 
In\nPython, NumPy provides the fundamental data\nstructure and API for working with raw ND arrays. However, real-world\ndatasets are usually more than just raw numbers; they have labels which\nencode information about how the array values map to locations in space,\ntime, etc.\nXarray doesn't just keep track of labels on arrays -- it uses them to\nprovide a powerful and concise interface. For example:\n\nApply operations over dimensions by name: x.sum('time').\nSelect values by label instead of integer location:\nx.loc['2014-01-01'] or x.sel(time='2014-01-01').\nMathematical operations (e.g., x - y) vectorize across multiple\ndimensions (array broadcasting) based on dimension names, not shape.\nFlexible split-apply-combine operations with groupby:\nx.groupby('time.dayofyear').mean().\nDatabase like alignment based on coordinate labels that smoothly\nhandles missing values: x, y = xr.align(x, y, join='outer').\nKeep track of arbitrary metadata in the form of a Python dictionary:\nx.attrs.\n\nDocumentation\nLearn more about xarray in its official documentation at\nhttps://docs.xarray.dev/.\nTry out an interactive Jupyter\nnotebook.\nContributing\nYou can find information about contributing to xarray at our\nContributing\npage.\nGet in touch\n\nAsk usage questions (\"How do I?\") on\nGitHub Discussions.\nReport bugs, suggest features or view the source code on\nGitHub.\nFor less well defined questions or ideas, or to announce other\nprojects of interest to xarray users, use the mailing\nlist.\n\nNumFOCUS\n\nXarray is a fiscally sponsored project of\nNumFOCUS, a nonprofit dedicated to supporting\nthe open source scientific computing community. If you like Xarray and\nwant to support our mission, please consider making a\ndonation to support\nour efforts.\nHistory\nXarray is an evolution of an internal tool developed at The Climate\nCorporation. It was originally written by Climate\nCorp researchers Stephan Hoyer, Alex Kleeman and Eugene Brevdo and was\nreleased as open source in May 2014. The project was renamed from\n\"xray\" in January 2016. Xarray became a fiscally sponsored project of\nNumFOCUS in August 2018.\nContributors\nThanks to our many contributors!\n\nLicense\nCopyright 2014-2023, xarray Developers\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. 
You may\nobtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nXarray bundles portions of pandas, NumPy and Seaborn, all of which are\navailable under a \"3-clause BSD\" license:\n\npandas: setup.py, xarray/util/print_versions.py\nNumPy: xarray/core/npcompat.py\nSeaborn: _determine_cmap_params in xarray/core/plot/utils.py\n\nXarray also bundles portions of CPython, which is available under the\n\"Python Software Foundation License\" in xarray/core/pycompat.py.\nXarray uses icons from the icomoon package (free version), which is\navailable under the \"CC BY 4.0\" license.\nThe full text of these licenses are included in the licenses directory.\n\n\n", "description": "einstats - Stats and linear algebra for xarray."}, {"name": "xarray-einstats", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nxarray-einstats\nInstallation\nOverview\nContributing\nRelevant links\nSimilar projects\nCite xarray-einstats\n\n\n\n\n\nREADME.md\n\n\n\n\nxarray-einstats\n\n\n\n\n\n\nStats, linear algebra and einops for xarray\nInstallation\nTo install, run\n(.venv) $ pip install xarray-einstats\n\nSee the docs for more extensive install instructions.\nOverview\nAs stated in their website:\n\nxarray makes working with multi-dimensional labeled arrays simple, efficient and fun!\n\nThe code is often more verbose, but it is generally because it is clearer and thus less error prone\nand more intuitive.\nHere are some examples of such trade-off where we believe the increased clarity is worth\nthe extra characters:\n\n\n\nnumpy\nxarray\n\n\n\n\na[2, 5]\nda.sel(drug=\"paracetamol\", subject=5)\n\n\na.mean(axis=(0, 1))\nda.mean(dim=(\"chain\", \"draw\"))\n\n\na.reshape((-1, 10))\nda.stack(sample=(\"chain\", \"draw\"))\n\n\na.transpose(2, 0, 1)\nda.transpose(\"drug\", \"chain\", \"draw\")\n\n\n\nIn some other cases however, using xarray can result in overly verbose code\nthat often also becomes less clear. xarray_einstats provides wrappers\naround some numpy and scipy functions (mostly numpy.linalg and scipy.stats)\nand around einops with an api and features adapted to xarray.\nContinue at the getting started page.\nContributing\nxarray-einstats is in active development and all types of contributions are welcome!\nSee the contributing guide for details on how to contribute.\nRelevant links\n\nDocumentation: https://einstats.python.arviz.org/en/latest/\nContributing guide: https://einstats.python.arviz.org/en/latest/contributing/overview.html\nArviZ project website: https://www.arviz.org\n\nSimilar projects\nHere we list some similar projects we know of. Note that all of\nthem are complementary and don't overlap:\n\nxr-scipy\nxarray-extras\nxhistogram\nxrft\n\nCite xarray-einstats\nIf you use this software, please cite it using the following template and the version\nspecific DOI provided by Zenodo. Click on the badge to go to the Zenodo page\nand select the DOI corresponding to the version you used\n\n\nOriol Abril-Pla. (2022). arviz-devs/xarray-einstats <version>. Zenodo. 
<version_doi>\n\nor in bibtex format:\n@software{xarray_einstats2022,\n  author       = {Abril-Pla, Oriol},\n  title        = {{xarray-einstats}},\n  year         = 2022,\n  url          = {https://github.com/arviz-devs/xarray-einstats},\n  publisher    = {Zenodo},\n  version      = {<version>},\n  doi          = {<version_doi>},\n}\n\n\n\n"}, {"name": "wsproto", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPure Python, pure state-machine WebSocket implementation\nUsage\nDocumentation\nContributing\nLicense\nAuthors\n\n\n\n\n\nREADME.rst\n\n\n\n\nPure Python, pure state-machine WebSocket implementation\n\n\n\n\n\n\n\nThis repository contains a pure-Python implementation of a WebSocket protocol\nstack. It's written from the ground up to be embeddable in whatever program you\nchoose to use, ensuring that you can communicate via WebSockets, as defined in\nRFC6455, regardless of your programming\nparadigm.\nThis repository does not provide a parsing layer, a network layer, or any rules\nabout concurrency. Instead, it's a purely in-memory solution, defined in terms\nof data actions and WebSocket frames. RFC6455 and Compression Extensions for\nWebSocket via RFC7692 are fully\nsupported.\nwsproto supports Python 3.6.1 or higher.\nTo install it, just run:\n$ pip install wsproto\n\nUsage\nLet's assume you have some form of network socket available. wsproto client\nconnections automatically generate an HTTP request to initiate the WebSocket\nhandshake. To create a WebSocket client connection:\nfrom wsproto import WSConnection, ConnectionType\nfrom wsproto.events import Request\n\nws = WSConnection(ConnectionType.CLIENT)\nws.send(Request(host='echo.websocket.org', target='/'))\nTo create a WebSocket server connection:\nfrom wsproto.connection import WSConnection, ConnectionType\n\nws = WSConnection(ConnectionType.SERVER)\nEvery time you send a message, or call a ping, or simply if you receive incoming\ndata, wsproto might respond with some outgoing data that you have to send:\nsome_socket.send(ws.bytes_to_send())\nBoth connection types need to receive incoming data:\nws.receive_data(some_byte_string_of_data)\nAnd wsproto will issue events if the data contains any WebSocket messages or state changes:\nfor event in ws.events():\n    if isinstance(event, Request):\n        # only server connections get this event\n        ws.send(AcceptConnection())\n    elif isinstance(event, CloseConnection):\n        # guess nobody wants to talk to us any more...\n        pass\n    elif isinstance(event, TextMessage):\n        print('We got text!', event.data)\n    elif isinstance(event, BytesMessage):\n        print('We got bytes!', event.data)\nTake a look at our docs for a full list of events\n<https://wsproto.readthedocs.io/en/latest/api.html#events>!\n\nDocumentation\nDocumentation is available at https://wsproto.readthedocs.io/en/latest/.\n\nContributing\nwsproto welcomes contributions from anyone! Unlike many other projects we\nare happy to accept cosmetic contributions and small contributions, in addition\nto large feature requests and changes.\nBefore you contribute (either by opening an issue or filing a pull request),\nplease read the contribution guidelines.\n\nLicense\nwsproto is made available under the MIT License. 
For more details, see the\nLICENSE file in the repository.\n\nAuthors\nwsproto was created by @jeamland, and is maintained by the python-hyper\ncommunity.\n\n\n", "description": "WebSocket implementation."}, {"name": "wrapt", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nwrapt\nDocumentation\nQuick Start\nRepository\n\n\n\n\n\nREADME.rst\n\n\n\n\nwrapt\n \nThe aim of the wrapt module is to provide a transparent object proxy\nfor Python, which can be used as the basis for the construction of function\nwrappers and decorator functions.\nThe wrapt module focuses very much on correctness. It therefore goes\nway beyond existing mechanisms such as functools.wraps() to ensure that\ndecorators preserve introspectability, signatures, type checking abilities\netc. The decorators that can be constructed using this module will work in\nfar more scenarios than typical decorators and provide more predictable and\nconsistent behaviour.\nTo ensure that the overhead is as minimal as possible, a C extension module\nis used for performance critical components. An automatic fallback to a\npure Python implementation is also provided where a target system does not\nhave a compiler to allow the C extension to be compiled.\n\nDocumentation\nFor further information on the wrapt module see:\n\nhttp://wrapt.readthedocs.org/\n\n\nQuick Start\nTo implement your decorator you need to first define a wrapper function.\nThis will be called each time a decorated function is called. The wrapper\nfunction needs to take four positional arguments:\n\nwrapped - The wrapped function which in turns needs to be called by your wrapper function.\ninstance - The object to which the wrapped function was bound when it was called.\nargs - The list of positional arguments supplied when the decorated function was called.\nkwargs - The dictionary of keyword arguments supplied when the decorated function was called.\n\nThe wrapper function would do whatever it needs to, but would usually in\nturn call the wrapped function that is passed in via the wrapped\nargument.\nThe decorator @wrapt.decorator then needs to be applied to the wrapper\nfunction to convert it into a decorator which can in turn be applied to\nother functions.\nimport wrapt\n\n@wrapt.decorator\ndef pass_through(wrapped, instance, args, kwargs):\n    return wrapped(*args, **kwargs)\n\n@pass_through\ndef function():\n    pass\nIf you wish to implement a decorator which accepts arguments, then wrap the\ndefinition of the decorator in a function closure. Any arguments supplied\nto the outer function when the decorator is applied, will be available to\nthe inner wrapper when the wrapped function is called.\nimport wrapt\n\ndef with_arguments(myarg1, myarg2):\n    @wrapt.decorator\n    def wrapper(wrapped, instance, args, kwargs):\n        return wrapped(*args, **kwargs)\n    return wrapper\n\n@with_arguments(1, 2)\ndef function():\n    pass\nWhen applied to a normal function or static method, the wrapper function\nwhen called will be passed None as the instance argument.\nWhen applied to an instance method, the wrapper function when called will\nbe passed the instance of the class the method is being called on as the\ninstance argument. This will be the case even when the instance method\nwas called explicitly via the class and the instance passed as the first\nargument. 
That is, the instance will never be passed as part of args.\nWhen applied to a class method, the wrapper function when called will be\npassed the class type as the instance argument.\nWhen applied to a class, the wrapper function when called will be passed\nNone as the instance argument. The wrapped argument in this\ncase will be the class.\nThe above rules can be summarised with the following example.\nimport inspect\n\n@wrapt.decorator\ndef universal(wrapped, instance, args, kwargs):\n    if instance is None:\n        if inspect.isclass(wrapped):\n            # Decorator was applied to a class.\n            return wrapped(*args, **kwargs)\n        else:\n            # Decorator was applied to a function or staticmethod.\n            return wrapped(*args, **kwargs)\n    else:\n        if inspect.isclass(instance):\n            # Decorator was applied to a classmethod.\n            return wrapped(*args, **kwargs)\n        else:\n            # Decorator was applied to an instancemethod.\n            return wrapped(*args, **kwargs)\nUsing these checks it is therefore possible to create a universal decorator\nthat can be applied in all situations. It is no longer necessary to create\ndifferent variants of decorators for normal functions and instance methods,\nor use additional wrappers to convert a function decorator into one that\nwill work for instance methods.\nIn all cases, the wrapped function passed to the wrapper function is called\nin the same way, with args and kwargs being passed. The\ninstance argument doesn't need to be used in calling the wrapped\nfunction.\n\nRepository\nFull source code for the wrapt module, including documentation files\nand unit tests, can be obtained from github.\n\nhttps://github.com/GrahamDumpleton/wrapt\n\n\n\n", "description": "Decorator to wrap functions and methods."}, {"name": "wordcloud", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nword_cloud\nInstallation\nInstallation notes\nExamples\nCommand-line usage\nLicensing\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\nword_cloud\nA little word cloud generator in Python. Read more about it on the blog\npost or the website.\nThe code is tested against Python 3.7, 3.8, 3.9, 3.10, 3.11.\nInstallation\nIf you are using pip:\npip install wordcloud\n\nIf you are using conda, you can install from the conda-forge channel:\nconda install -c conda-forge wordcloud\n\nInstallation notes\nwordcloud depends on numpy, pillow, and matplotlib.\nIf there are no wheels available for your version of python, installing the\npackage requires having a C compiler set up. Before installing a compiler, report\nan issue describing the version of python and operating system being used.\nExamples\nCheck out examples/simple.py for a short intro. A sample output is:\n\nOr run examples/masked.py to see more options. 
A sample output is:\n\nGetting fancy with some colors:\n\nGenerating wordclouds for Arabic:\n\nCommand-line usage\nThe wordcloud_cli tool can be used to generate word clouds directly from the command-line:\n$ wordcloud_cli --text mytext.txt --imagefile wordcloud.png\n\nIf you're dealing with PDF files, then pdftotext, included by default with many Linux distribution, comes in handy:\n$ pdftotext mydocument.pdf - | wordcloud_cli --imagefile wordcloud.png\n\nIn the previous example, the - argument orders pdftotext to write the resulting text to stdout, which is then piped to the stdin of wordcloud_cli.py.\nUse wordcloud_cli --help so see all available options.\nLicensing\nThe wordcloud library is MIT licenced, but contains DroidSansMono.ttf, a true type font by Google, that is apache licensed.\nThe font is by no means integral, and any other font can be used by setting the font_path variable when creating a WordCloud object.\n\n\n", "description": "Word cloud generator."}, {"name": "werkzeug", "readme": "\nwerkzeug German noun: \u201ctool\u201d. Etymology: werk (\u201cwork\u201d), zeug (\u201cstuff\u201d)\nWerkzeug is a comprehensive WSGI web application library. It began as\na simple collection of various utilities for WSGI applications and has\nbecome one of the most advanced WSGI utility libraries.\nIt includes:\n\nAn interactive debugger that allows inspecting stack traces and\nsource code in the browser with an interactive interpreter for any\nframe in the stack.\nA full-featured request object with objects to interact with\nheaders, query args, form data, files, and cookies.\nA response object that can wrap other WSGI applications and handle\nstreaming data.\nA routing system for matching URLs to endpoints and generating URLs\nfor endpoints, with an extensible system for capturing variables\nfrom URLs.\nHTTP utilities to handle entity tags, cache control, dates, user\nagents, cookies, files, and more.\nA threaded WSGI server for use while developing applications\nlocally.\nA test client for simulating HTTP requests during testing without\nrequiring running a server.\n\nWerkzeug doesn\u2019t enforce any dependencies. It is up to the developer to\nchoose a template engine, database adapter, and even how to handle\nrequests. It can be used to build all sorts of end user applications\nsuch as blogs, wikis, or bulletin boards.\nFlask wraps Werkzeug, using it to handle the details of WSGI while\nproviding more structure and patterns for defining powerful\napplications.\n\nInstalling\nInstall and update using pip:\npip install -U Werkzeug\n\n\nA Simple Example\nfrom werkzeug.wrappers import Request, Response\n\n@Request.application\ndef application(request):\n    return Response('Hello, World!')\n\nif __name__ == '__main__':\n    from werkzeug.serving import run_simple\n    run_simple('localhost', 4000, application)\n\n\nDonate\nThe Pallets organization develops and supports Werkzeug and other\npopular packages. 
In order to grow the community of contributors and\nusers, and allow the maintainers to devote more time to the projects,\nplease donate today.\n\n\nLinks\n\nDocumentation: https://werkzeug.palletsprojects.com/\nChanges: https://werkzeug.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/Werkzeug/\nSource Code: https://github.com/pallets/werkzeug/\nIssue Tracker: https://github.com/pallets/werkzeug/issues/\nChat: https://discord.gg/pallets\n\n\n", "description": "WSGI utility library for web applications."}, {"name": "websockets", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWhat is websockets?\nwebsockets for enterprise\nWhy should I use websockets?\nWhy shouldn't I use websockets?\nWhat else?\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n \n \n  \n \n\nWhat is websockets?\nwebsockets is a library for building WebSocket servers and clients in Python\nwith a focus on correctness, simplicity, robustness, and performance.\nBuilt on top of asyncio, Python's standard asynchronous I/O framework, the\ndefault implementation provides an elegant coroutine-based API.\nAn implementation on top of threading and a Sans-I/O implementation are also\navailable.\nDocumentation is available on Read the Docs.\nHere's an echo server with the asyncio API:\n#!/usr/bin/env python\n\nimport asyncio\nfrom websockets.server import serve\n\nasync def echo(websocket):\n    async for message in websocket:\n        await websocket.send(message)\n\nasync def main():\n    async with serve(echo, \"localhost\", 8765):\n        await asyncio.Future()  # run forever\n\nasyncio.run(main())\nHere's how a client sends and receives messages with the threading API:\n#!/usr/bin/env python\n\nfrom websockets.sync.client import connect\n\ndef hello():\n    with connect(\"ws://localhost:8765\") as websocket:\n        websocket.send(\"Hello world!\")\n        message = websocket.recv()\n        print(f\"Received: {message}\")\n\nhello()\nDoes that look good?\nGet started with the tutorial!\n\n\nwebsockets for enterprise\nAvailable as part of the Tidelift Subscription\nThe maintainers of websockets and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.\n\n(If you contribute to websockets and would like to become an official support provider, let me know.)\nWhy should I use websockets?\nThe development of websockets is shaped by four principles:\n\nCorrectness: websockets is heavily tested for compliance with\nRFC 6455. Continuous integration fails under 100% branch coverage.\nSimplicity: all you need to understand is msg = await ws.recv() and\nawait ws.send(msg). websockets takes care of managing connections\nso you can focus on your application.\nRobustness: websockets is built for production. For example, it was\nthe only library to handle backpressure correctly before the issue\nbecame widely known in the Python community.\nPerformance: memory usage is optimized and configurable. A C extension\naccelerates expensive operations. It's pre-compiled for Linux, macOS and\nWindows and packaged in the wheel format for each system and Python version.\n\nDocumentation is a first class concern in the project. 
Head over to Read the\nDocs and see for yourself.\n\nWhy shouldn't I use websockets?\n\nIf you prefer callbacks over coroutines: websockets was created to\nprovide the best coroutine-based API to manage WebSocket connections in\nPython. Pick another library for a callback-based API.\n\nIf you're looking for a mixed HTTP / WebSocket library: websockets aims\nat being an excellent implementation of RFC 6455: The WebSocket Protocol\nand RFC 7692: Compression Extensions for WebSocket. Its support for HTTP\nis minimal \u2014 just enough for an HTTP health check.\nIf you want to do both in the same server, look at HTTP frameworks that\nbuild on top of websockets to support WebSocket connections, like\nSanic.\n\n\n\nWhat else?\nBug reports, patches and suggestions are welcome!\nTo report a security vulnerability, please use the Tidelift security\ncontact. Tidelift will coordinate the fix and disclosure.\nFor anything else, please open an issue or send a pull request.\nParticipants must uphold the Contributor Covenant code of conduct.\nwebsockets is released under the BSD license.\n\n\n", "description": "Library for WebSocket clients and servers.", "category": "Web"}, {"name": "websocket-client", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nwebsocket-client\nDocumentation\nContributing\nInstallation\nUsage Tips\nPerformance\nExamples\nLong-lived Connection\nShort-lived Connection\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\nwebsocket-client\nwebsocket-client is a WebSocket client for Python. It provides access\nto low level APIs for WebSockets. websocket-client implements version\nhybi-13\nof the WebSocket protocol. This client does not currently support the\npermessage-deflate extension from\nRFC 7692.\nDocumentation\nThis project's documentation can be found at\nhttps://websocket-client.readthedocs.io/\nContributing\nPlease see the contribution guidelines\nInstallation\nYou can use either python3 setup.py install or pip3 install websocket-client\nto install. This module is tested on Python 3.8+.\nThere are several optional dependencies that can be installed to enable\nspecific websocket-client features.\n\nTo install python-socks for proxy usage and wsaccel for a minor performance boost, use:\npip3 install websocket-client[optional]\nTo install websockets to run unit tests using the local echo server, use:\npip3 install websocket-client[test]\nTo install Sphinx and sphinx_rtd_theme to build project documentation, use:\npip3 install websocket-client[docs]\n\nWhile not a strict dependency, rel\nis useful when using run_forever with automatic reconnect. Install rel with pip3 install rel.\nFootnote: Some shells, such as zsh, require you to escape the [ and ] characters with a \\.\nUsage Tips\nCheck out the documentation's FAQ for additional guidelines:\nhttps://websocket-client.readthedocs.io/en/latest/faq.html\nKnown issues with this library include lack of WebSocket Compression\nsupport (RFC 7692) and minimal threading documentation/support.\nPerformance\nThe send and validate_utf8 methods can sometimes be bottleneck.\nYou can disable UTF8 validation in this library (and receive a\nperformance enhancement) with the skip_utf8_validation parameter.\nIf you want to get better performance, install wsaccel. While\nwebsocket-client does not depend on wsaccel, it will be used if\navailable. wsaccel doubles the speed of UTF8 validation and\noffers a very minor 10% performance boost when masking the\npayload data as part of the send process. 
Numpy used to\nbe a suggested performance enhancement alternative, but\nissue #687\nfound it didn't help.\nExamples\nMany more examples are found in the\nexamples documentation.\nLong-lived Connection\nMost real-world WebSockets situations involve longer-lived connections.\nThe WebSocketApp run_forever loop will automatically try to reconnect\nto an open WebSocket connection when a network\nconnection is lost if it is provided with:\n\na dispatcher argument (async dispatcher like rel or pyevent)\na non-zero reconnect argument (delay between disconnection and attempted reconnection)\n\nrun_forever provides a variety of event-based connection controls\nusing callbacks like on_message and on_error.\nrun_forever does not automatically reconnect if the server\ncloses the WebSocket gracefully (returning\na standard websocket close code).\nThis is the logic behind the decision.\nCustomizing behavior when the server closes\nthe WebSocket should be handled in the on_close callback.\nThis example uses rel\nfor the dispatcher to provide automatic reconnection.\nimport websocket\nimport _thread\nimport time\nimport rel\n\ndef on_message(ws, message):\n    print(message)\n\ndef on_error(ws, error):\n    print(error)\n\ndef on_close(ws, close_status_code, close_msg):\n    print(\"### closed ###\")\n\ndef on_open(ws):\n    print(\"Opened connection\")\n\nif __name__ == \"__main__\":\n    websocket.enableTrace(True)\n    ws = websocket.WebSocketApp(\"wss://api.gemini.com/v1/marketdata/BTCUSD\",\n                              on_open=on_open,\n                              on_message=on_message,\n                              on_error=on_error,\n                              on_close=on_close)\n\n    ws.run_forever(dispatcher=rel, reconnect=5)  # Set dispatcher to automatic reconnection, 5 second reconnect delay if connection closed unexpectedly\n    rel.signal(2, rel.abort)  # Keyboard Interrupt\n    rel.dispatch()\nShort-lived Connection\nThis is if you want to communicate a short message and disconnect\nimmediately when done. 
For example, if you want to confirm that a WebSocket\nserver is running and responds properly to a specific request.\nfrom websocket import create_connection\n\nws = create_connection(\"ws://echo.websocket.events/\")\nprint(ws.recv())\nprint(\"Sending 'Hello, World'...\")\nws.send(\"Hello, World\")\nprint(\"Sent\")\nprint(\"Receiving...\")\nresult =  ws.recv()\nprint(\"Received '%s'\" % result)\nws.close()\n\n\n"}, {"name": "webencodings", "readme": "\n\n\n\nREADME.rst\n\n\n\n\npython-webencodings\nThis is a Python implementation of the WHATWG Encoding standard.\n\nLatest documentation: http://packages.python.org/webencodings/\nSource code and issue tracker:\nhttps://github.com/gsnedders/python-webencodings\nPyPI releases: http://pypi.python.org/pypi/webencodings\nLicense: BSD\nPython 2.6+ and 3.3+\n\nIn order to be compatible with legacy web content\nwhen interpreting something like Content-Type: text/html; charset=latin1,\ntools need to use a particular set of aliases for encoding labels\nas well as some overriding rules.\nFor example, US-ASCII and iso-8859-1 on the web are actually\naliases for windows-1252, and an UTF-8 or UTF-16 BOM takes precedence\nover any other encoding declaration.\nThe Encoding standard defines all such details so that implementations do\nnot have to reverse-engineer each other.\nThis module has encoding labels and BOM detection,\nbut the actual implementation for encoders and decoders is Python\u2019s.\n\n\n", "description": "Implementation of WHATWG Encoding standard."}, {"name": "weasyprint", "readme": "\nThe Awesome Document Factory\nWeasyPrint is a smart solution helping web developers to create PDF\ndocuments. It turns simple HTML pages into gorgeous statistical reports,\ninvoices, tickets\u2026\nFrom a technical point of view, WeasyPrint is a visual rendering engine for\nHTML and CSS that can export to PDF. It aims to support web standards for\nprinting. WeasyPrint is free software made available under a BSD license.\nIt is based on various libraries but not on a full rendering engine like\nWebKit or Gecko. The CSS layout engine is written in Python, designed for\npagination, and meant to be easy to hack on.\n\nFree software: BSD license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/weasyprint\nExamples: https://weasyprint.org/#samples\nChangelog: https://github.com/Kozea/WeasyPrint/releases\nCode, issues, tests: https://github.com/Kozea/WeasyPrint\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\nWeasyPrint has been created and developed by Kozea (https://kozea.fr/).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org/).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to WeasyPrint. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the BSD\n3-clause license, without any additional terms or conditions. 
For full\nauthorship information, see the version control history.\n", "description": "HTML/CSS to PDF generator."}, {"name": "wcwidth", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nInstallation\nExample\nChoosing a Version\nwcwidth, wcswidth\nDeveloping\nUses\nOther Languages\nHistory\n\n\n\n\n\nREADME.rst\n\n\n\n\n \n \n\n\nIntroduction\nThis library is mainly for CLI programs that carefully produce output for\nTerminals, or make pretend to be an emulator.\nProblem Statement: The printable length of most strings is equal to the\nnumber of cells they occupy on the screen, 1 character : 1 cell.  However,\nthere are categories of characters that occupy 2 cells (full-wide), and\nothers that occupy 0 cells (zero-width).\nSolution: POSIX.1-2001 and POSIX.1-2008 conforming systems provide\nwcwidth(3) and wcswidth(3) C functions, which this python module's\nfunctions precisely copy.  These functions return the number of cells a\nunicode string is expected to occupy.\n\nInstallation\nThe stable version of this package is maintained on pypi, install using pip:\npip install wcwidth\n\n\nExample\nProblem: given the following phrase (Japanese),\n\n>>>  text = u'\u30b3\u30f3\u30cb\u30c1\u30cf'\n\nPython incorrectly uses the string length of 5 codepoints rather than the\nprintable length of 10 cells, so that when using the rjust function, the\noutput length is wrong:\n>>> print(len('\u30b3\u30f3\u30cb\u30c1\u30cf'))\n5\n\n>>> print('\u30b3\u30f3\u30cb\u30c1\u30cf'.rjust(20, '_'))\n_______________\u30b3\u30f3\u30cb\u30c1\u30cf\n\nBy defining our own \"rjust\" function that uses wcwidth, we can correct this:\n>>> def wc_rjust(text, length, padding=' '):\n...    from wcwidth import wcswidth\n...    return padding * max(0, (length - wcswidth(text))) + text\n...\n\nOur solution uses wcswidth to determine the string length correctly:\n>>> from wcwidth import wcswidth\n>>> print(wcswidth('\u30b3\u30f3\u30cb\u30c1\u30cf'))\n10\n\n>>> print(wc_rjust('\u30b3\u30f3\u30cb\u30c1\u30cf', 20, '_'))\n__________\u30b3\u30f3\u30cb\u30c1\u30cf\n\n\nChoosing a Version\nExport an environment variable, UNICODE_VERSION. This should be done by\nterminal emulators or those developers experimenting with authoring one of\ntheir own, from shell:\n$ export UNICODE_VERSION=13.0\n\nIf unspecified, the latest version is used. If your Terminal Emulator does not\nexport this variable, you can use the jquast/ucs-detect utility to\nautomatically detect and export it to your shell.\n\nwcwidth, wcswidth\nUse function wcwidth() to determine the length of a single unicode\ncharacter, and wcswidth() to determine the length of many, a string\nof unicode characters.\nBriefly, return values of function wcwidth() are:\n\n-1\nIndeterminate (not printable).\n0\nDoes not advance the cursor, such as NULL or Combining.\n2\nCharacters of category East Asian Wide (W) or East Asian\nFull-width (F) which are displayed using two terminal cells.\n1\nAll others.\n\nFunction wcswidth() simply returns the sum of all values for each character\nalong a string, or -1 when it occurs anywhere along a string.\nFull API Documentation at http://wcwidth.readthedocs.org\n\nDeveloping\nInstall wcwidth in editable mode:\npip install -e.\n\nExecute unit tests using tox:\ntox\n\nRegenerate python code tables from latest Unicode Specification data files:\ntox -e update\n\nSupplementary tools for browsing and testing terminals for wide unicode\ncharacters are found in the bin/ of this project's source code.  
Just ensure\nto first pip install -erequirements-develop.txt from this projects main\nfolder. For example, an interactive browser for testing:\npython ./bin/wcwidth-browser.py\n\n\nUses\nThis library is used in:\n\njquast/blessed: a thin, practical wrapper around terminal capabilities in\nPython.\njonathanslenders/python-prompt-toolkit: a Library for building powerful\ninteractive command lines in Python.\ndbcli/pgcli: Postgres CLI with autocompletion and syntax highlighting.\nthomasballinger/curtsies: a Curses-like terminal wrapper with a display\nbased on compositing 2d arrays of text.\nselectel/pyte: Simple VTXXX-compatible linux terminal emulator.\nastanin/python-tabulate: Pretty-print tabular data in Python, a library\nand a command-line utility.\nLuminosoInsight/python-ftfy: Fixes mojibake and other glitches in Unicode\ntext.\nnbedos/termtosvg: Terminal recorder that renders sessions as SVG\nanimations.\npeterbrittain/asciimatics: Package to help people create full-screen text\nUIs.\n\n\nOther Languages\n\ntimoxley/wcwidth: JavaScript\njanlelis/unicode-display_width: Ruby\nalecrabbit/php-wcwidth: PHP\nText::CharWidth: Perl\nbluebear94/Terminal-WCWidth: Perl 6\nmattn/go-runewidth: Go\nemugel/wcwidth: Haxe\naperezdc/lua-wcwidth: Lua\njoachimschmidt557/zig-wcwidth: Zig\nfumiyas/wcwidth-cjk: LD_PRELOAD override\njoshuarubin/wcwidth9: Unicode version 9 in C\n\n\nHistory\n\n0.2.6 2023-01-14\n\nUpdated tables to include Unicode Specification 14.0.0 and 15.0.0.\nChanged developer tools to use pip-compile, and to use jinja2 templates\nfor code generation in bin/update-tables.py to prepare for possible\ncompiler optimization release.\n\n\n0.2.1 .. 0.2.5 2020-06-23\n\nRepository changes to update tests and packaging issues, and\nbegin tagging repository with matching release versions.\n\n\n0.2.0 2020-06-01\n\nEnhancement: Unicode version may be selected by exporting the\nEnvironment variable UNICODE_VERSION, such as 13.0, or 6.3.0.\nSee the jquast/ucs-detect CLI utility for automatic detection.\nEnhancement:\nAPI Documentation is published to readthedocs.org.\nUpdated tables for all Unicode Specifications with files\npublished in a programmatically consumable format, versions 4.1.0\nthrough 13.0\n\n\n0.1.9 2020-03-22\n\nPerformance optimization by Avram Lubkin, PR #35.\nUpdated tables to Unicode Specification 13.0.0.\n\n\n0.1.8 2020-01-01\n\nUpdated tables to Unicode Specification 12.0.0. (PR #30).\n\n\n0.1.7 2016-07-01\n\nUpdated tables to Unicode Specification 9.0.0. (PR #18).\n\n\n0.1.6 2016-01-08 Production/Stable\n\nLICENSE file now included with distribution.\n\n\n0.1.5 2015-09-13 Alpha\n\nBugfix:\nResolution of \"combining character width\" issue, most especially\nthose that previously returned -1 now often (correctly) return 0.\nresolved by Philip Craig via PR #11.\nDeprecated:\nThe module path wcwidth.table_comb is no longer available,\nit has been superseded by module path wcwidth.table_zero.\n\n\n0.1.4 2014-11-20 Pre-Alpha\n\nFeature: wcswidth() now determines printable length\nfor (most) combining characters.  
The developer's tool\nbin/wcwidth-browser.py is improved to display combining\ncharacters when provided the --combining option\n(Thomas Ballinger and Leta Montopoli PR #5).\nFeature: added static analysis (prospector) to testing\nframework.\n\n\n0.1.3 2014-10-29 Pre-Alpha\n\nBugfix: 2nd parameter of wcswidth was not honored.\n(Thomas Ballinger, PR #4).\n\n\n0.1.2 2014-10-28 Pre-Alpha\n\nUpdated tables to Unicode Specification 7.0.0.\n(Thomas Ballinger, PR #3).\n\n\n0.1.1 2014-05-14 Pre-Alpha\n\nInitial release to pypi, Based on Unicode Specification 6.3.0\n\n\n\nThis code was originally derived directly from C code of the same name,\nwhose latest version is available at\nhttp://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c:\n* Markus Kuhn -- 2007-05-26 (Unicode 5.0)\n*\n* Permission to use, copy, modify, and distribute this software\n* for any purpose and without fee is hereby granted. The author\n* disclaims all warranties with regard to this software.\n\n\n\n", "description": "Measures number of wide characters in a terminal."}, {"name": "watchfiles", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nwatchfiles\nInstallation\nUsage\nwatch Usage\nawatch Usage\nrun_process Usage\narun_process Usage\nCLI\n\n\n\n\n\nREADME.md\n\n\n\n\nwatchfiles\n\n\n\n\n\nSimple, modern and high performance file watching and code reload in python.\n\nDocumentation: watchfiles.helpmanual.io\nSource Code: github.com/samuelcolvin/watchfiles\n\nUnderlying file system notifications are handled by the Notify rust library.\nThis package was previously named \"watchgod\",\nsee the migration guide for more information.\nInstallation\nwatchfiles requires Python 3.7 - 3.10.\npip install watchfiles\nBinaries are available for:\n\nLinux: x86_64, aarch64, i686, armv7l, musl-x86_64 & musl-aarch64\nMacOS: x86_64 & arm64 (except python 3.7)\nWindows: amd64 & win32\n\nOtherwise, you can install from source which requires Rust stable to be installed.\nUsage\nHere are some examples of what watchfiles can do:\nwatch Usage\nfrom watchfiles import watch\n\nfor changes in watch('./path/to/dir'):\n    print(changes)\nSee watch docs for more details.\nawatch Usage\nimport asyncio\nfrom watchfiles import awatch\n\nasync def main():\n    async for changes in awatch('/path/to/dir'):\n        print(changes)\n\nasyncio.run(main())\nSee awatch docs for more details.\nrun_process Usage\nfrom watchfiles import run_process\n\ndef foobar(a, b, c):\n    ...\n\nif __name__ == '__main__':\n    run_process('./path/to/dir', target=foobar, args=(1, 2, 3))\nSee run_process docs for more details.\narun_process Usage\nimport asyncio\nfrom watchfiles import arun_process\n\ndef foobar(a, b, c):\n    ...\n\nasync def main():\n    await arun_process('./path/to/dir', target=foobar, args=(1, 2, 3))\n\nif __name__ == '__main__':\n    asyncio.run(main())\nSee arun_process docs for more details.\nCLI\nwatchfiles also comes with a CLI for running and reloading code. 
To run some command when files in src change:\nwatchfiles \"some command\" src\n\nFor more information, see the CLI docs.\nOr run\nwatchfiles --help\n\n\n", "description": "File watching and code reloading."}, {"name": "wasabi", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nwasabi: A lightweight console printing and formatting toolkit\n\ud83d\udcac FAQ\nAre you going to add more features?\nCan I use this for my projects?\nWhy wasabi?\n\u231b\ufe0f Installation\n\ud83c\udf9b API\nfunctionmsg\nclassPrinter\nmethodPrinter.__init__\nmethodPrinter.text\nmethodPrinter.good, Printer.fail, Printer.warn, Printer.info\nmethodPrinter.divider\ncontextmanagerPrinter.loading\nmethodPrinter.table, Printer.row\npropertyPrinter.counts\nTables\nfunctiontable\nfunctionrow\nclassTracebackPrinter\nmethodTracebackPrinter.__init__\nmethodTracebackPrinter.__call__\nclassMarkdownRenderer\nmethodMarkdownRenderer.__init__\nmethodMarkdownRenderer.add\npropertyMarkdownRenderer.text\nmethodMarkdownRenderer.table\nmethodMarkdownRenderer.title\nmethodMarkdownRenderer.list\nmethodMarkdownRenderer.link\nmethodMarkdownRenderer.code_block\nmethodMarkdownRenderer.code, MarkdownRenderer.bold, MarkdownRenderer.italic\nUtilities\nfunctioncolor\nfunctionwrap\nfunctiondiff_strings\nEnvironment variables\n\ud83d\udd14 Run tests\n\n\n\n\n\nREADME.md\n\n\n\n\nwasabi: A lightweight console printing and formatting toolkit\nOver the years, I've written countless implementations of coloring and\nformatting utilities to output messages in our libraries like\nspaCy, Thinc and\nProdigy. While there are many other great open-source\noptions, I've always ended up wanting something slightly different or slightly\ncustom.\nThis package is still a work in progress and aims to bundle those utilities in a\nstandardised way so they can be shared across our other projects. It's super\nlightweight, has zero dependencies and works with Python 3.6+.\n\n\n\n\n\n\n\ud83d\udcac FAQ\nAre you going to add more features?\nYes, there's still a few of helpers and features to port over. However, the new\nfeatures will be heavily biased by what we (think we) need. I always appreciate\npull requests to improve the existing functionality \u2013 but I want to keep this\nlibrary as simple, lightweight and specific as possible.\nCan I use this for my projects?\nSure, if you like it, feel free to adopt it! Just keep in mind that the package\nis very specific and not intended to be a full-featured and fully customisable\nformatting library. If that's what you're looking for, you might want to try\nother packages \u2013 for example, colored,\ncrayons,\ncolorful,\ntabulate,\nconsole or\npy-term, to name a few.\nWhy wasabi?\nI was looking for a short and descriptive name, but everything was already\ntaken. So I ended up naming this package after one of my rats, Wasabi. \ud83d\udc00\n\u231b\ufe0f Installation\npip install wasabi\n\ud83c\udf9b API\nfunction msg\nAn instance of Printer, initialized with the default config. Useful as a quick\nshortcut if you don't need to customize initialization.\nfrom wasabi import msg\n\nmsg.good(\"Success!\")\nclass Printer\nmethod Printer.__init__\nfrom wasabi import Printer\n\nmsg = Printer()\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\npretty\nbool\nPretty-print output with colors and icons.\nTrue\n\n\nno_print\nbool\nDon't actually print, just return.\nFalse\n\n\ncolors\ndict\nAdd or overwrite color values, names mapped to 0-256.\nNone\n\n\nicons\ndict\nAdd or overwrite icon. 
Name mapped to unicode.\nNone\n\n\nline_max\nint\nMaximum line length (for divider).\n80\n\n\nanimation\nstr\nSteps of loading animation for Printer.loading.\n\"\u2819\u2839\u2838\u283c\u2834\u2826\u2827\u2807\u280f\"\n\n\nanimation_ascii\nstr\nAlternative animation for ASCII terminals.\n\"|/-\\\\\"\n\n\nhide_animation\nbool\nDon't display animation, e.g. for logs.\nFalse\n\n\nignore_warnings\nbool\nDon't output messages of type MESSAGE.WARN.\nFalse\n\n\nenv_prefix\nstr\nPrefix for environment variables, e.g. WASABI_LOG_FRIENDLY.\n\"WASABI\"\n\n\ntimestamp\nbool\nAdd timestamp before output.\nFalse\n\n\nRETURNS\nPrinter\nThe initialized printer.\n-\n\n\n\nmethod Printer.text\nmsg = Printer()\nmsg.text(\"Hello world!\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntitle\nstr\nThe main text to print.\n\"\"\n\n\ntext\nstr\nOptional additional text to print.\n\"\"\n\n\ncolor\n\u00a0unicode / int\nColor name or value.\nNone\n\n\nicon\nstr\nName of icon to add.\nNone\n\n\nshow\nbool\nWhether to print or not. Can be used to only output messages under certain condition, e.g. if --verbose flag is set.\nTrue\n\n\nspaced\nbool\nWhether to add newlines around the output.\nFalse\n\n\nno_print\nbool\nDon't actually print, just return. Overwrites global setting.\nFalse\n\n\nexits\nint\nIf set, perform a system exit with the given code after printing.\nNone\n\n\n\nmethod Printer.good, Printer.fail, Printer.warn, Printer.info\nPrint special formatted messages.\nmsg = Printer()\nmsg.good(\"Success\")\nmsg.fail(\"Error\")\nmsg.warn(\"Warning\")\nmsg.info(\"Info\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntitle\nstr\nThe main text to print.\n\"\"\n\n\ntext\nstr\nOptional additional text to print.\n\"\"\n\n\nshow\nbool\nWhether to print or not. Can be used to only output messages under certain condition, e.g. if --verbose flag is set.\nTrue\n\n\nexits\nint\nIf set, perform a system exit with the given code after printing.\nNone\n\n\n\nmethod Printer.divider\nPrint a formatted divider.\nmsg = Printer()\nmsg.divider(\"Heading\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nHeadline text. If empty, only the line is printed.\n\"\"\n\n\nchar\nstr\nSingle line character to repeat.\n\"=\"\n\n\nshow\nbool\nWhether to print or not. Can be used to only output messages under certain condition, e.g. if --verbose flag is set.\nTrue\n\n\nicon\nstr\nOptional icon to use with title.\nNone\n\n\n\ncontextmanager Printer.loading\nmsg = Printer()\nwith msg.loading(\"Loading...\"):\n    # Do something here that takes longer\n    time.sleep(10)\nmsg.good(\"Successfully loaded something!\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe text to display while loading.\n\"Loading...\"\n\n\n\nmethod Printer.table, Printer.row\nSee Tables.\nproperty Printer.counts\nGet the counts of how often the special printers were fired, e.g.\nMESSAGES.GOOD. 
Can be used to print an overview like \"X warnings\"\nmsg = Printer()\nmsg.good(\"Success\")\nmsg.fail(\"Error\")\nmsg.warn(\"Error\")\n\nprint(msg.counts)\n# Counter({'good': 1, 'fail': 2, 'warn': 0, 'info': 0})\n\n\n\nArgument\nType\nDescription\n\n\n\n\nRETURNS\nCounter\nThe counts for the individual special message types.\n\n\n\nTables\nfunction table\nLightweight helper to format tabular data.\nfrom wasabi import table\n\ndata = [(\"a1\", \"a2\", \"a3\"), (\"b1\", \"b2\", \"b3\")]\nheader = (\"Column 1\", \"Column 2\", \"Column 3\")\nwidths = (8, 9, 10)\naligns = (\"r\", \"c\", \"l\")\nformatted = table(data, header=header, divider=True, widths=widths, aligns=aligns)\nColumn 1   Column 2    Column 3\n--------   ---------   ----------\n      a1      a2       a3\n      b1      b2       b3\n\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ndata\niterable / dict\nThe data to render. Either a list of lists (one per row) or a dict for two-column tables.\n\n\n\nheader\niterable\nOptional header columns.\nNone\n\n\nfooter\niterable\nOptional footer columns.\nNone\n\n\ndivider\nbool\nShow a divider line between header/footer and body.\nFalse\n\n\nwidths\niterable / \"auto\"\nColumn widths in order. If \"auto\", widths will be calculated automatically based on the largest value.\n\"auto\"\n\n\nmax_col\nint\nMaximum column width.\n30\n\n\nspacing\nint\nNumber of spaces between columns.\n3\n\n\naligns\niterable / unicode\nColumns alignments in order. \"l\" (left, default), \"r\" (right) or \"c\" (center). If If a string, value is used for all columns.\nNone\n\n\nmultiline\nbool\nIf a cell value is a list of a tuple, render it on multiple lines, with one value per line.\nFalse\n\n\nenv_prefix\nunicode\nPrefix for environment variables, e.g. WASABI_LOG_FRIENDLY.\n\"WASABI\"\n\n\ncolor_values\ndict\nAdd or overwrite color values, name mapped to value.\nNone\n\n\nfg_colors\niterable\nForeground colors, one per column. None can be specified for individual columns to retain the default background color.\nNone\n\n\nbg_colors\niterable\nBackground colors, one per column. None can be specified for individual columns to retain the default background color.\nNone\n\n\nRETURNS\nstr\nThe formatted table.\n\n\n\n\nfunction row\nfrom wasabi import row\n\ndata = (\"a1\", \"a2\", \"a3\")\nformatted = row(data)\na1   a2   a3\n\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ndata\niterable\nThe individual columns to format.\n\n\n\nwidths\nlist / int / \"auto\"\nColumn widths, either one integer for all columns or an iterable of values. If \"auto\", widths will be calculated automatically based on the largest value.\n\"auto\"\n\n\nspacing\nint\nNumber of spaces between columns.\n3\n\n\naligns\nlist\nColumns alignments in order. \"l\" (left), \"r\" (right) or \"c\" (center).\nNone\n\n\nenv_prefix\nunicode\nPrefix for environment variables, e.g. WASABI_LOG_FRIENDLY.\n\"WASABI\"\n\n\nfg_colors\nlist\nForeground colors for the columns, in order. None can be specified for individual columns to retain the default foreground color.\nNone\n\n\nbg_colors\nlist\nBackground colors for the columns, in order. None can be specified for individual columns to retain the default background color.\nNone\n\n\nRETURNS\nstr\nThe formatted row.\n\n\n\n\nclass TracebackPrinter\nHelper to output custom formatted tracebacks and error messages. 
Currently used\nin Thinc.\nmethod TracebackPrinter.__init__\nInitialize a traceback printer.\nfrom wasabi import TracebackPrinter\n\ntb = TracebackPrinter(tb_base=\"thinc\", tb_exclude=(\"check.py\",))\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ncolor_error\nstr / int\nColor name or code for errors (passed to color helper).\n\"red\"\n\n\ncolor_tb\nstr / int\nColor name or code for traceback headline (passed to color helper).\n\"blue\"\n\n\ncolor_highlight\nstr / int\nColor name or code for highlighted text (passed to color helper).\n\"yellow\"\n\n\nindent\nint\nNumber of spaces to use for indentation.\n2\n\n\ntb_base\nstr\nName of directory to use to show relative paths. For example, \"thinc\" will look for the last occurence of \"/thinc/\" in a path and only show path to the right of it.\nNone\n\n\ntb_exclude\ntuple\nList of filenames to exclude from traceback.\ntuple()\n\n\nRETURNS\nTracebackPrinter\nThe traceback printer.\n\n\n\n\nmethod TracebackPrinter.__call__\nOutput custom formatted tracebacks and errors.\nfrom wasabi import TracebackPrinter\nimport traceback\n\ntb = TracebackPrinter(tb_base=\"thinc\", tb_exclude=(\"check.py\",))\n\nerror = tb(\"Some error\", \"Error description\", highlight=\"kwargs\", tb=traceback.extract_stack())\nraise ValueError(error)\n  Some error\n  Some error description\n\n  Traceback:\n  \u251c\u2500 <lambda> [61] in .env/lib/python3.6/site-packages/pluggy/manager.py\n  \u251c\u2500\u2500\u2500 _multicall [187] in .env/lib/python3.6/site-packages/pluggy/callers.py\n  \u2514\u2500\u2500\u2500\u2500\u2500 pytest_fixture_setup [969] in .env/lib/python3.6/site-packages/_pytest/fixtures.py\n         >>> result = call_fixture_func(fixturefunc, request, kwargs)\n\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntitle\nstr\nThe message title.\n\n\n\n*texts\nstr\nOptional texts to print (one per line).\n\n\n\nhighlight\nstr\nOptional sequence to highlight in the traceback, e.g. the bad value that caused the error.\nFalse\n\n\ntb\niterable\nThe traceback, e.g. generated by traceback.extract_stack().\nNone\n\n\nRETURNS\nstr\nThe formatted traceback. Can be printed or raised by custom exception.\n\n\n\n\nclass MarkdownRenderer\nHelper to create Markdown-formatted content. 
Will store the blocks added to the\nMarkdown document in order.\nfrom wasabi import MarkdownRenderer\n\nmd = MarkdownRenderer()\nmd.add(md.title(1, \"Hello world\"))\nmd.add(\"This is a paragraph\")\nprint(md.text)\nmethod MarkdownRenderer.__init__\nInitialize a Markdown renderer.\nfrom wasabi import MarkdownRenderer\n\nmd = MarkdownRenderer()\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\nno_emoji\nbool\nDon't include emoji in titles.\nFalse\n\n\nRETURNS\nMarkdownRenderer\nThe renderer.\n\n\n\n\nmethod MarkdownRenderer.add\nAdd a block to the Markdown document.\nfrom wasabi import MarkdownRenderer\n\nmd = MarkdownRenderer()\nmd.add(\"This is a paragraph\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe content to add.\n\n\n\n\nproperty MarkdownRenderer.text\nThe rendered Markdown document.\nmd = MarkdownRenderer()\nmd.add(\"This is a paragraph\")\nprint(md.text)\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\nRETURNS\nstr\nThe document as a single string.\n\n\n\n\nmethod MarkdownRenderer.table\nCreate a Markdown-formatted table.\nmd = MarkdownRenderer()\ntable = md.table([(\"a\", \"b\"), (\"c\", \"d\")], [\"Column 1\", \"Column 2\"])\nmd.add(table)\n| Column 1 | Column 2 |\n| --- | --- |\n| a | b |\n| c | d |\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ndata\nIterable[Iterable[str]]\nThe body, one iterable per row, containig an interable of column contents.\n\n\n\nheader\nIterable[str]\nThe column names.\n\n\n\naligns\nIterable[str]\nColumns alignments in order. \"l\" (left, default), \"r\" (right) or \"c\" (center).\nNone\n\n\nRETURNS\nstr\nThe table.\n\n\n\n\nmethod MarkdownRenderer.title\nCreate a Markdown-formatted heading.\nmd = MarkdownRenderer()\nmd.add(md.title(1, \"Hello world\"))\nmd.add(md.title(2, \"Subheading\", \"\ud83d\udc96\"))\n# Hello world\n\n## \ud83d\udc96 Subheading\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\nlevel\nint\nThe heading level, e.g. 3 for ###.\n\n\n\ntext\nstr\nThe heading text.\n\n\n\nemoji\nstr\nOptional emoji to show before heading.\nNone\n\n\nRETURNS\nstr\nThe rendered title.\n\n\n\n\nmethod MarkdownRenderer.list\nCreate a Markdown-formatted non-nested list.\nmd = MarkdownRenderer()\nmd.add(md.list([\"item\", \"other item\"]))\nmd.add(md.list([\"first item\", \"second item\"], numbered=True))\n- item\n- other item\n\n1. first item\n2. 
second item\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\nitems\nIterable[str]\nThe list items.\n\n\n\nnumbered\nbool\nWhether to use a numbered list.\nFalse\n\n\nRETURNS\nstr\nThe rendered list.\n\n\n\n\nmethod MarkdownRenderer.link\nCreate a Markdown-formatted link.\nmd = MarkdownRenderer()\nmd.add(md.link(\"Google\", \"https://google.com\"))\n[Google](https://google.com)\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe link text.\n\n\n\nurl\nstr\nThe link URL.\n\n\n\nRETURNS\nstr\nThe rendered link.\n\n\n\n\nmethod MarkdownRenderer.code_block\nCreate a Markdown-formatted code block.\nmd = MarkdownRenderer()\nmd.add(md.code_block(\"import spacy\", \"python\"))\n```python\nimport spacy\n```\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe code text.\n\n\n\nlang\nstr\nOptional code language.\n\"\"\n\n\nRETURNS\nstr\nThe rendered code block.\n\n\n\n\nmethod MarkdownRenderer.code, MarkdownRenderer.bold, MarkdownRenderer.italic\nCreate a Markdown-formatted text.\nmd = MarkdownRenderer()\nmd.add(md.code(\"import spacy\"))\nmd.add(md.bold(\"Hello!\"))\nmd.add(md.italic(\"Emphasis\"))\n`import spacy`\n\n**Hello!**\n\n_Emphasis_\nUtilities\nfunction color\nfrom wasabi import color\n\nformatted = color(\"This is a text\", fg=\"white\", bg=\"green\", bold=True)\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe text to be formatted.\n-\n\n\nfg\nstr / int\nForeground color. String name or 0 - 256.\nNone\n\n\nbg\nstr / int\nBackground color. String name or 0 - 256.\nNone\n\n\nbold\nbool\nFormat the text in bold.\nFalse\n\n\nunderline\nbool\nFormat the text by underlining.\nFalse\n\n\nRETURNS\nstr\nThe formatted string.\n\n\n\n\nfunction wrap\nfrom wasabi import wrap\n\nwrapped = wrap(\"Hello world, this is a text.\", indent=2)\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\ntext\nstr\nThe text to wrap.\n-\n\n\nwrap_max\nint\nMaximum line width, including indentation.\n80\n\n\nindent\nint\nNumber of spaces used for indentation.\n4\n\n\nRETURNS\nstr\nThe wrapped text with line breaks.\n\n\n\n\nfunction diff_strings\nfrom wasabi import diff_strings\n\ndiff = diff_strings(\"hello world!\", \"helloo world\")\n\n\n\nArgument\nType\nDescription\nDefault\n\n\n\n\na\nstr\nThe first string to diff.\n\n\n\nb\nstr\nThe second string to diff.\n\n\n\nfg\nstr / int\nForeground color. String name or 0 - 256.\n\"black\"\n\n\nbg\ntuple\nBackground colors as (insert, delete) tuple of string name or 0 - 256.\n(\"green\", \"red\")\n\n\nRETURNS\nstr\nThe formatted diff.\n\n\n\n\nEnvironment variables\nWasabi also respects the following environment variables. The prefix can be\ncustomised on the Printer via the env_prefix argument. For example, setting\nenv_prefix=\"SPACY\" will expect the environment variable SPACY_LOG_FRIENDLY.\n\n\n\nName\nDescription\n\n\n\n\nANSI_COLORS_DISABLED\nDisable colors.\n\n\nWASABI_LOG_FRIENDLY\nMake output nicer for logs (no colors, no animations).\n\n\nWASABI_NO_PRETTY\nDisable pretty printing, e.g. colors and icons.\n\n\n\n\ud83d\udd14 Run tests\nFork or clone the repo, make sure you have pytest installed and then run it on\nthe package directory. 
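To round off the utility functions documented above, a small self-contained sketch using color, wrap and diff_strings exactly as described; the rendered output depends on your terminal and on the environment variables listed in the previous section:

from wasabi import color, diff_strings, wrap

# Colored text, indented wrapping, and a colorized diff of two strings.
print(color("This is a text", fg="white", bg="green", bold=True))
print(wrap("Hello world, this is a text.", indent=2))
print(diff_strings("hello world!", "helloo world"))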
The tests are located in\n/wasabi/tests.\npip install pytest\ncd wasabi\npython -m pytest wasabi\n\n\n", "description": "Formatting and printing toolkit for console output."}, {"name": "Wand", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWand\nDocs\nCommunity\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\nWand\nWand is a ctypes-based simple ImageMagick binding for Python,\nsupporting 2.7, 3.3+, and PyPy. All functionalities of MagickWand API are\nimplemented in Wand.\nYou can install the package from PyPI by using pip:\n$ pip install Wand\nOr would you like to enjoy the bleeding edge?  Check out the head\nrevision of the source code from the GitHub repository:\n$ git clone git://github.com/emcconville/wand.git\n$ cd wand/\n$ python setup.py install\n\nDocs\n\nRecent version\nhttps://docs.wand-py.org/\nDevelopment version\nhttps://docs.wand-py.org/en/latest/\n\n\n\n\nCommunity\n\nWebsite\nhttp://wand-py.org/\nGitHub\nhttps://github.com/emcconville/wand\nPackage Index (Cheeseshop)\nhttps://pypi.python.org/pypi/Wand\n\n\nDiscord\nhttps://discord.gg/wtDWDE9fXK\nStack Overflow tag (Q&A)\nhttp://stackoverflow.com/questions/tagged/wand\nContinuous Integration (Travis CI)\nhttps://app.travis-ci.com/emcconville/wand\n\n\nContinuous Integration (GitHub Actions)\nhttps://github.com/emcconville/wand/actions\n\n\n\nCode Coverage\nhttps://coveralls.io/r/emcconville/wand\n\n\n\n\n\n", "description": "ImageMagick binding for Python."}, {"name": "uvloop", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPerformance\nInstallation\nUsing uvloop\nBuilding From Source\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\nuvloop is a fast, drop-in replacement of the built-in asyncio\nevent loop.  uvloop is implemented in Cython and uses libuv\nunder the hood.\nThe project documentation can be found\nhere.  Please also check out the\nwiki.\n\nPerformance\nuvloop makes asyncio 2-4x faster.\n\nThe above chart shows the performance of an echo server with different\nmessage sizes.  The sockets benchmark uses loop.sock_recv() and\nloop.sock_sendall() methods; the streams benchmark uses asyncio\nhigh-level streams, created by the asyncio.start_server() function;\nand the protocol benchmark uses loop.create_server() with a simple\necho protocol.  
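For reference, here is what a minimal streams-based echo server of the kind used in these benchmarks can look like; this is a sketch rather than the benchmark harness, and the only uvloop-specific call is uvloop.install(), everything else is standard asyncio:

import asyncio
import uvloop

async def echo(reader, writer):
    # Echo every received chunk straight back to the client.
    while True:
        data = await reader.read(65536)
        if not data:
            break
        writer.write(data)
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(echo, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

uvloop.install()  # switch asyncio over to uvloop's event loop
asyncio.run(main())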
Read more about uvloop in a\nblog post\nabout it.\n\nInstallation\nuvloop requires Python 3.7 or greater and is available on PyPI.\nUse pip to install it:\n$ pip install uvloop\n\nNote that it is highly recommended to upgrade pip before installing\nuvloop with:\n$ pip install -U pip\n\n\nUsing uvloop\nimport asyncio\nimport sys\n\nimport uvloop\n\nasync def main():\n    # Main entry-point.\n    ...\n\nif sys.version_info >= (3, 11):\n    with asyncio.Runner(loop_factory=uvloop.new_event_loop) as runner:\n        runner.run(main())\nelse:\n    uvloop.install()\n    asyncio.run(main())\n\nBuilding From Source\nTo build uvloop, you'll need Python 3.7 or greater:\n\nClone the repository:\n$ git clone --recursive git@github.com:MagicStack/uvloop.git\n$ cd uvloop\n\n\nCreate a virtual environment and activate it:\n$ python3.7 -m venv uvloop-dev\n$ source uvloop-dev/bin/activate\n\n\nInstall development dependencies:\n$ pip install -e .[dev]\n\n\nBuild and run tests:\n$ make\n$ make test\n\n\n\n\nLicense\nuvloop is dual-licensed under MIT and Apache 2.0 licenses.\n\n\n", "description": "Drop-in asyncio event loop replacement."}, {"name": "uvicorn", "readme": "\n\n\n\n\nAn ASGI web server, for Python.\n\n\n\n\nDocumentation: https://www.uvicorn.org\nRequirements: Python 3.8+\nUvicorn is an ASGI web server implementation for Python.\nUntil recently Python has lacked a minimal low-level server/application interface for\nasync frameworks. The ASGI specification fills this gap, and means we're now able to\nstart building a common set of tooling usable across all async frameworks.\nUvicorn supports HTTP/1.1 and WebSockets.\nQuickstart\nInstall using pip:\n$ pip install uvicorn\n\nThis will install uvicorn with minimal (pure Python) dependencies.\n$ pip install 'uvicorn[standard]'\n\nThis will install uvicorn with \"Cython-based\" dependencies (where possible) and other \"optional extras\".\nIn this context, \"Cython-based\" means the following:\n\nthe event loop uvloop will be installed and used if possible.\nthe http protocol will be handled by httptools if possible.\n\nMoreover, \"optional extras\" means that:\n\nthe websocket protocol will be handled by websockets (should you want to use wsproto you'd need to install it manually) if possible.\nthe --reload flag in development mode will use watchfiles.\nwindows users will have colorama installed for the colored logs.\npython-dotenv will be installed should you want to use the --env-file option.\nPyYAML will be installed to allow you to provide a .yaml file to --log-config, if desired.\n\nCreate an application, in example.py:\nasync def app(scope, receive, send):\n    assert scope['type'] == 'http'\n\n    await send({\n        'type': 'http.response.start',\n        'status': 200,\n        'headers': [\n            (b'content-type', b'text/plain'),\n        ],\n    })\n    await send({\n        'type': 'http.response.body',\n        'body': b'Hello, world!',\n    })\n\nRun the server:\n$ uvicorn example:app\n\n\nWhy ASGI?\nMost well established Python Web frameworks started out as WSGI-based frameworks.\nWSGI applications are a single, synchronous callable that takes a request and returns a response.\nThis doesn\u2019t allow for long-lived connections, like you get with long-poll HTTP or WebSocket connections,\nwhich WSGI doesn't support well.\nHaving an async concurrency model also allows for options such as lightweight background tasks,\nand can be less of a limiting factor for endpoints that have long periods being blocked on network\nI/O such as 
dealing with slow HTTP requests.\n\nAlternative ASGI servers\nA strength of the ASGI protocol is that it decouples the server implementation\nfrom the application framework. This allows for an ecosystem of interoperating\nwebservers and application frameworks.\nDaphne\nThe first ASGI server implementation, originally developed to power Django Channels, is the Daphne webserver.\nIt is run widely in production, and supports HTTP/1.1, HTTP/2, and WebSockets.\nAny of the example applications given here can equally well be run using daphne instead.\n$ pip install daphne\n$ daphne app:App\n\nHypercorn\nHypercorn was initially part of the Quart web framework, before\nbeing separated out into a standalone ASGI server.\nHypercorn supports HTTP/1.1, HTTP/2, and WebSockets.\nIt also supports the excellent trio async framework, as an alternative to asyncio.\n$ pip install hypercorn\n$ hypercorn app:App\n\nMangum\nMangum is an adapter for using ASGI applications with AWS Lambda & API Gateway.\n\nUvicorn is BSD licensed code.Designed & crafted with care.\u2014 \ud83e\udd84  \u2014\n", "description": "Lightning-fast ASGI server implementation.", "category": "Web"}, {"name": "ujson", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nUltraJSON\nUsage\nEncoder options\nencode_html_chars\nensure_ascii\nescape_forward_slashes\nindent\nBenchmarks\nTest machine\nVersions\nBuild options\nDebugging symbols\nUJSON_BUILD_NO_STRIP\nUsing an external or system copy of the double-conversion library\nUJSON_BUILD_DC_INCLUDES\nUJSON_BUILD_DC_LIBS\n\n\n\n\n\nREADME.md\n\n\n\n\nUltraJSON\n\n\n\n\n\n\n\nUltraJSON is an ultra fast JSON encoder and decoder written in pure C with bindings for\nPython 3.8+.\nInstall with pip:\npython -m pip install ujson\nUsage\nMay be used as a drop in replacement for most other JSON parsers for Python:\n>>> import ujson\n>>> ujson.dumps([{\"key\": \"value\"}, 81, True])\n'[{\"key\":\"value\"},81,true]'\n>>> ujson.loads(\"\"\"[{\"key\": \"value\"}, 81, true]\"\"\")\n[{'key': 'value'}, 81, True]\nEncoder options\nencode_html_chars\nUsed to enable special encoding of \"unsafe\" HTML characters into safer Unicode\nsequences. Default is False:\n>>> ujson.dumps(\"<script>John&Doe\", encode_html_chars=True)\n'\"\\\\u003cscript\\\\u003eJohn\\\\u0026Doe\"'\nensure_ascii\nLimits output to ASCII and escapes all extended characters above 127. Default is True.\nIf your end format supports UTF-8, setting this option to false is highly recommended to\nsave space:\n>>> ujson.dumps(\"\u00e5\u00e4\u00f6\")\n'\"\\\\u00e5\\\\u00e4\\\\u00f6\"'\n>>> ujson.dumps(\"\u00e5\u00e4\u00f6\", ensure_ascii=False)\n'\"\u00e5\u00e4\u00f6\"'\nescape_forward_slashes\nControls whether forward slashes (/) are escaped. Default is True:\n>>> ujson.dumps(\"http://esn.me\")\n'\"http:\\\\/\\\\/esn.me\"'\n>>> ujson.dumps(\"http://esn.me\", escape_forward_slashes=False)\n'\"http://esn.me\"'\nindent\nControls whether indentation (\"pretty output\") is enabled. 
Default is 0 (disabled):\n>>> ujson.dumps({\"foo\": \"bar\"})\n'{\"foo\":\"bar\"}'\n>>> print(ujson.dumps({\"foo\": \"bar\"}, indent=4))\n{\n    \"foo\":\"bar\"\n}\nBenchmarks\nUltraJSON calls/sec compared to other popular JSON parsers with performance gain\nspecified below each.\nTest machine\nLinux 5.15.0-1037-azure x86_64 #44-Ubuntu SMP Thu Apr 20 13:19:31 UTC 2023\nVersions\n\nCPython 3.11.3 (main, Apr  6 2023, 07:55:46) [GCC 11.3.0]\nujson        : 5.7.1.dev26\norjson       : 3.9.0\nsimplejson   : 3.19.1\njson         : 2.0.9\n\n\n\n\n\nujson\norjson\nsimplejson\njson\n\n\n\n\nArray with 256 doubles\n\n\n\n\n\n\nencode\n18,282\n79,569\n5,681\n5,935\n\n\ndecode\n28,765\n93,283\n13,844\n13,367\n\n\nArray with 256 UTF-8 strings\n\n\n\n\n\n\nencode\n3,457\n26,437\n3,630\n3,653\n\n\ndecode\n3,576\n4,236\n522\n1,978\n\n\nArray with 256 strings\n\n\n\n\n\n\nencode\n44,769\n125,920\n21,401\n23,565\n\n\ndecode\n28,518\n75,043\n41,496\n42,221\n\n\nMedium complex object\n\n\n\n\n\n\nencode\n11,672\n47,659\n3,913\n5,729\n\n\ndecode\n12,522\n23,599\n8,007\n9,720\n\n\nArray with 256 True values\n\n\n\n\n\n\nencode\n110,444\n425,919\n81,428\n84,347\n\n\ndecode\n203,430\n318,193\n146,867\n156,249\n\n\nArray with 256 dict{string, int} pairs\n\n\n\n\n\n\nencode\n14,170\n72,514\n3,050\n7,079\n\n\ndecode\n19,116\n27,542\n9,374\n13,713\n\n\nDict with 256 arrays with 256 dict{string, int} pairs\n\n\n\n\n\n\nencode\n55\n282\n11\n26\n\n\ndecode\n48\n53\n27\n34\n\n\nDict with 256 arrays with 256 dict{string, int} pairs, outputting sorted keys\n\n\n\n\n\n\nencode\n42\n\n8\n27\n\n\nComplex object\n\n\n\n\n\n\nencode\n462\n\n397\n444\n\n\ndecode\n480\n618\n177\n310\n\n\n\nAbove metrics are in call/sec, larger is better.\nBuild options\nFor those with particular needs, such as Linux distribution packagers, several\nbuild options are provided in the form of environment variables.\nDebugging symbols\nUJSON_BUILD_NO_STRIP\nBy default, debugging symbols are stripped on Linux platforms. Setting this\nenvironment variable with a value of 1 or True disables this behavior.\nUsing an external or system copy of the double-conversion library\nThese two environment variables are typically used together, something like:\nexport UJSON_BUILD_DC_INCLUDES='/usr/include/double-conversion'\nexport UJSON_BUILD_DC_LIBS='-ldouble-conversion'\nUsers planning to link against an external shared library should be aware of\nthe ABI-compatibility requirements this introduces when upgrading system\nlibraries or copying compiled wheels to other machines.\nUJSON_BUILD_DC_INCLUDES\nOne or more directories, delimited by os.pathsep (same as the PATH\nenvironment variable), in which to look for double-conversion header files;\nthe default is to use the bundled copy.\nUJSON_BUILD_DC_LIBS\nCompiler flags needed to link the double-conversion library; the default\nis to use the bundled copy.\n\n\n", "description": "Ultra fast JSON encoder and decoder for Python."}, {"name": "tzlocal", "readme": "\n\ntzlocal\n\nAPI CHANGE!\nWith version 3.0 of tzlocal, tzlocal no longer returned pytz objects, but\nzoneinfo objects, which has a different API. Since 4.0, it now restored\npartial compatibility for pytz users through Paul Ganssle\u2019s\npytz_deprecation_shim.\ntzlocal 4.0 also adds an official function get_localzone_name() to get only\nthe timezone name, instead of a timezone object. 
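A short sketch of the difference between the two calls, assuming tzlocal 4.0 or later; the exact exception raised when no zone name can be determined is left generic here:

from tzlocal import get_localzone, get_localzone_name

tz = get_localzone()  # always returns a tzinfo object for the local zone
try:
    name = get_localzone_name()  # may fail if no zone name is configured
except Exception:
    name = None
print(tz, name)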
On unix, it can raise an\nerror if you don\u2019t have a timezone name configured, where get_localzone()\nwill succeed, so only use that if you need the timezone name.\n4.0 also adds way more information on what is going wrong in your\nconfiguration when the configuration files are unclear or contradictory.\nVersion 5.0 removes the pytz_deprecation_shim, and now only returns\nzoneinfo objects, like version 3.0 did. If you need pytz objects, you have\nto stay on version 4.0. If there are bugs in version 4.0, I will release\nupdates, but there will be no further functional changes on the 4.x branch.\n\n\nInfo\nThis Python module returns a tzinfo object (with a pytz_deprecation_shim,\nfor pytz compatibility) with the local timezone information, under Unix and\nWindows.\nIt requires Python 3.7 or later, and will use the backports.zoneinfo\npackage, for Python 3.7 and 3.8.\nThis module attempts to fix a glaring hole in the pytz and zoneinfo\nmodules, that there is no way to get the local timezone information, unless\nyou know the zoneinfo name, and under several Linux distros that\u2019s hard or\nimpossible to figure out.\nWith tzlocal you only need to call get_localzone() and you will get a\ntzinfo object with the local time zone info. On some Unices you will\nstill not get to know what the timezone name is, but you don\u2019t need that when\nyou have the tzinfo file. However, if the timezone name is readily available\nit will be used.\n\n\nSupported systems\nThese are the systems that are in theory supported:\n\n\nWindows 2000 and later\nAny unix-like system with a /etc/localtime or /usr/local/etc/localtime\n\n\nIf you have one of the above systems and it does not work, it\u2019s a bug.\nPlease report it.\nPlease note that if you are getting a time zone called local, this is not\na bug, it\u2019s actually the main feature of tzlocal, that even if your\nsystem does NOT have a configuration file with the zoneinfo name of your time\nzone, it will still work.\nYou can also use tzlocal to get the name of your local timezone, but only\nif your system is configured to make that possible. tzlocal looks for the\ntimezone name in /etc/timezone, /var/db/zoneinfo,\n/etc/sysconfig/clock and /etc/conf.d/clock. 
If your\n/etc/localtime is a symlink it can also extract the name from that\nsymlink.\nIf you need the name of your local time zone, then please make sure your\nsystem is properly configured to allow that.\nIf your unix system doesn\u2019t have a timezone configured, tzlocal will default\nto UTC.\n\n\nNotes on Docker\nIt turns out that Docker images frequently have broken timezone setups.\nThis usually resuts in a warning that the configuration is wrong, or that\nthe timezone offset doesn\u2019t match the found timezone.\nThe easiest way to fix that is to set a TZ variable in your docker setup\nto whatever timezone you want, which is usually the timezone your host\ncomputer has.\n\n\nUsage\nLoad the local timezone:\n\n>>> from tzlocal import get_localzone\n>>> tz = get_localzone()\n>>> tz\nzoneinfo.ZoneInfo(key='Europe/Warsaw')\n\n\nCreate a local datetime:\n\n>>> from datetime import datetime\n>>> dt = datetime(2015, 4, 10, 7, 22, tzinfo=tz)\n>>> dt\ndatetime.datetime(2015, 4, 10, 7, 22, tzinfo=zoneinfo.ZoneInfo(key='Europe/Warsaw'))\n\n\nLookup another timezone with zoneinfo (backports.zoneinfo on Python 3.8 or earlier):\n\n>>> from zoneinfo import ZoneInfo\n>>> eastern = ZoneInfo('US/Eastern')\n\n\nConvert the datetime:\n\n>>> dt.astimezone(eastern)\ndatetime.datetime(2015, 4, 10, 1, 22, tzinfo=zoneinfo.ZoneInfo(key='US/Eastern'))\n\n\nIf you just want the name of the local timezone, use get_localzone_name():\n\n>>> from tzlocal import get_localzone_name\n>>> get_localzone_name()\n\"Europe/Warsaw\"\n\n\nPlease note that under Unix, get_localzone_name() may fail if there is no zone\nconfigured, where get_localzone() would generally succeed.\n\n\nTroubleshooting\nIf you don\u2019t get the result you expect, try running it with debugging turned on.\nStart a python interpreter that has tzlocal installed, and run the following code:\nimport logging\nlogging.basicConfig(level=\"DEBUG\")\nimport tzlocal\ntzlocal.get_localzone()\nThe output should look something like this, and this will tell you what\nconfigurations were found:\nDEBUG:root:/etc/timezone found, contents:\n Europe/Warsaw\n\nDEBUG:root:/etc/localtime found\nDEBUG:root:2 found:\n {'/etc/timezone': 'Europe/Warsaw', '/etc/localtime is a symlink to': 'Europe/Warsaw'}\nzoneinfo.ZoneInfo(key='Europe/Warsaw')\n\n\nDevelopment\nFor ease of development, there is a Makefile that will help you with basic tasks,\nlike creating a development environment with all the necessary tools (although\nyou need a supported Python version installed first):\n$ make devenv\nTo run tests:\n$ make test\nCheck the syntax:\n$ make check\n\n\nMaintainer\n\nLennart Regebro, regebro@gmail.com\n\n\n\nContributors\n\nMarc Van Olmen\nBenjamen Meyer\nManuel Ebert\nXiaokun Zhu\nCameris\nEdward Betts\nMcK KIM\nCris Ewing\nAyala Shachar\nLev Maximov\nJakub Wilk\nJohn Quarles\nPreston Landers\nVictor Torres\nJean Jordaan\nZackary Welch\nMicka\u00ebl Schoentgen\nGabriel Corona\nAlex Gr\u00f6nholm\nJulin S\nMiroslav \u0160ediv\u00fd\nrevansSZ\nSam Treweek\nPeter Di Pasquale\nRongrong\n\n(Sorry if I forgot someone)\n\n\nLicense\n\nMIT https://opensource.org/licenses/MIT\n\n\n\n\nChanges\n\n5.0.1 (2023-05-15)\n\nThe logging info under windows made it look like it looked up the registry\ninfo even when you had a TZ environment, but it doesn\u2019t actually do that.\nImproved the handling of loggers.\n\n\n\n5.0 (2023-05-14)\n\nFixed a bug in the new assert_tz_offset method.\n\n\n\n5.0b2 (2023-04-11)\n\nChange how the system offset is calculated to deal with non-DST\ntemporary 
changes, such as Ramadan time in Morocco.\nChange the default to only warn when the timezone offset and system\noffset disagree (but still not even warn if TZ is set)\nAdd the assert_tz_offset() method to the top level for those who want\nto explicitly check and fail.\n\n\n\n5.0b1 (2023-04-07)\n\nRemoved the deprecation shim.\n\n\n\n4.4b1 (2023-03-20)\n\nAdded debug logging\n\n\n\n4.3 (2023-03-18)\n\nImproved the error message when the ZoneInfo cannot be found\nDon\u2019t error out because we find multiple possible timezones for\na symlink.\nMore stable on Android/Termux with proot\n\n\n\n4.2 (2022-04-02)\n\nIf TZ environment variable is set to /etc/localhost, and that\u2019s a link to\na zoneinfo file, then tzlocal will now find the timezone name, and not\njust return a localtime TZ object.\n\n\n\n4.1 (2021-10-29)\n\nNo changes from 4.1b1.\n\n\n\n4.1b1 (2021-10-28)\n\nIt turns out a lot of Linux distributions make the links between zoneinfo\naliases backwards, so instead of linking GB to Europe/London it actually\nlinks the other way. When /etc/localtime then links to Europe/London, and you\nalso have a config file saying Europe/London, the code that checks if\n/etc/localtime is a symlink ends up at GB instead of Europe/London and\nwe get an error, as it thinks GB and Europe/London are different zones.\nSo now we check the symlink of all timezones in the uniqueness test. We still\nreturn the name in the config file, though, so you would only get GB or Zulu\nreturned as the time zone instead of Europe/London or UTC if your only\nconfiguration is the /etc/localtime symlink, as that\u2019s checked last, and\ntzlocal will return the first configuration found.\n\nThe above change also means that GMT and UTC are no longer seen as synonyms,\nas zoneinfo does not see them as synonyms. This might be controversial,\nbut you just have to live with it. Pick one and stay with it. ;-)\n\n\n\n4.0.2 (2021-10-26)\n\nImproved the error message when you had a conflict including a\n/etc/localtime symlink.\n\n\n\n4.0.1 (2021-10-19)\n\nA long time bug in Ubuntu docker images seem to not get fixed,\nso I added a workaround.\n\n\n\n4.0.1b1 (2021-10-18)\n\nHandle UCT and Zulu as synonyms for UTC, while treating GMT and\nUTC as different.\n\n\n\n4.0 (2021-10-18)\n\nNo changes.\n\n\n\n4.0b5 (2021-10-18)\n\nFixed a bug in the Windows DST support.\n\n\n\n4.0b4 (2021-10-18)\n\nAdded support for turning off DST in Windows. That only works in\nwhole hour timezones, and honestly, if you need to turn off DST,\nyou should just use UTC as a timezone.\n\n\n\n4.0b3 (2021-10-08)\n\nReturning pytz_deprecation_shim zones to lower the surprise for pytz users.\nThe Windows OS environment variable \u2018TZ\u2019 will allow an override for\nsetting the timezone. The override timezone will be asserted for\ntimezone validity bit not compared against the systems timezone offset.\nThis allows for overriding the timezone when running tests.\nDropped support for Windows 2000, XP and Vista, supports Windows 7, 8 and 10.\n\n\n\n4.0b2 (2021-09-26)\n\nBig refactor; Implemented get_localzone_name() functions.\nAdding a Windows OS environment variable \u2018TZ\u2019 will allow an override for\nsetting the timezone (also see 4.0b3).\n\n\n\n4.0b1 (2021-08-21)\n\nNow finds and compares all the configs (under Unix-like systems) and\ntells you what files it found and how they conflict. 
This should make\nit a lot easier to figure out what goes wrong.\n\n\n\n3.0 (2021-08-13)\n\nModernized the packaging, moving to setup.cfg etc.\nHandles if the text config files incorrectly is a TZ file. (revanSZ)\n\n\n\n3.0b1 (2020-09-21)\n\nDropped Python 2 support\nSwitched timezone provider from pytz to zoneinfo (PEP 615)\n\n\n\n2.1 (2020-05-08)\n\nNo changes.\n\n\n\n2.1b1 (2020-02-08)\n\nThe is_dst flag is wrong for Europe/Dublin on some Unix releases.\nI changed to another way of determining if DST is in effect or not.\nAdded support for Python 3.7 and 3.8. Dropped 3.5 although it still works.\n\n\n\n2.0.0 (2019-07-23)\n\nNo differences since 2.0.0b3\n\n\nMajor differences since 1.5.1\n\nWhen no time zone configuration can be find, tzlocal now return UTC.\nThis is a major difference from 1.x, where an exception would be raised.\nThis change is because Docker images often have no configuration at all,\nand the unix utilities will then default to UTC, so we follow that.\nIf tzlocal on Unix finds a timezone name in a /etc config file, then\ntzlocal now verifies that the timezone it fouds has the same offset as\nthe local computer is configured with. If it doesn\u2019t, something is\nconfigured incorrectly. (Victor Torres, regebro)\nGet timezone via Termux getprop wrapper on Android. It\u2019s not officially\nsupported because we can\u2019t test it, but at least we make an effort.\n(Jean Jordaan)\n\n\n\nMinor differences and bug fixes\n\nSkip comment lines when parsing /etc/timezone. (Edward Betts)\nDon\u2019t load timezone from current directory. (Gabriel Corona)\nNow verifies that the config files actually contain something before\nreading them. (Zackary Welch, regebro)\nGot rid of a BytesWarning (Micka\u00ebl Schoentgen)\nNow handles if config file paths exists, but are directories.\nMoved tests out from distributions\nSupport wheels\n\n\n\n\n1.5.1 (2017-12-01)\n\n1.5 had a bug that slipped through testing, fixed that,\nincreased test coverage.\n\n\n\n1.5 (2017-11-30)\n\nNo longer treats macOS as special, but as a unix.\nget_windows_info.py is renamed to update_windows_mappings.py\nWindows mappings now also contain mappings from deprecated zoneinfo names.\n(Preston-Landers, regebro)\n\n\n\n1.4 (2017-04-18)\n\nI use MIT on my other projects, so relicensing.\n\n\n\n1.4b1 (2017-04-14)\n\nDropping support for Python versions nobody uses (2.5, 3.1, 3.2), adding 3.6\nPython 3.1 and 3.2 still works, 2.5 has been broken for some time.\nAyalash\u2019s OS X fix didn\u2019t work on Python 2.7, fixed that.\n\n\n\n1.3.2 (2017-04-12)\n\nEnsure closing of subprocess on OS X (ayalash)\nRemoved unused imports (jwilk)\nCloses stdout and stderr to get rid of ResourceWarnings (johnwquarles)\nUpdated Windows timezones (axil)\n\n\n\n1.3 (2016-10-15)\n\n#34: Added support for /var/db/zoneinfo\n\n\n\n1.2.2 (2016-03-02)\n\n#30: Fixed a bug on OS X.\n\n\n\n1.2.1 (2016-02-28)\n\nTests failed if TZ was set in the environment. (EdwardBetts)\nReplaces os.popen() with subprocess.Popen() for OS X to\nhandle when systemsetup doesn\u2019t exist. (mckabi, cewing)\n\n\n\n1.2 (2015-06-14)\n\nSystemd stores no time zone name, forcing us to look at the name of the file\nthat localtime symlinks to. 
(cameris)\n\n\n\n1.1.2 (2014-10-18)\n\nTimezones that has 3 items did not work on Mac OS X.\n(Marc Van Olmen)\nNow doesn\u2019t fail if the TZ environment variable isn\u2019t an Olsen time zone.\nSome timezones on Windows can apparently be empty (perhaps the are deleted).\nNow these are ignored.\n(Xiaokun Zhu)\n\n\n\n1.1.1 (2014-01-29)\n\nI forgot to add Etc/UTC as an alias for Etc/GMT.\n\n\n\n1.1 (2014-01-28)\n\nAdding better support for OS X.\nAdded support to map from tzdata/Olsen names to Windows names.\n(Thanks to Benjamen Meyer).\n\n\n\n1.0 (2013-05-29)\n\nFixed some more cases where spaces needs replacing with underscores.\nBetter handling of misconfigured /etc/timezone.\nBetter error message on Windows if we can\u2019t find a timezone at all.\n\n\n\n0.3 (2012-09-13)\n\nWindows 7 support.\nPython 2.5 supported; because it only needed a __future__ import.\nPython 3.3 tested, it worked.\nGot rid of relative imports, because I don\u2019t actually like them,\nso I don\u2019t know why I used them in the first place.\nFor each Windows zone, use the default zoneinfo zone, not the last one.\n\n\n\n0.2 (2012-09-12)\n\nPython 3 support.\n\n\n\n0.1 (2012-09-11)\n\nInitial release.\n\n\n\n", "description": "Returns tzinfo object with local timezone information."}, {"name": "typing-extensions", "readme": "\nTyping Extensions\n\nDocumentation \u2013\nPyPI\nOverview\nThe typing_extensions module serves two related purposes:\n\nEnable use of new type system features on older Python versions. For example,\ntyping.TypeGuard is new in Python 3.10, but typing_extensions allows\nusers on previous Python versions to use it too.\nEnable experimentation with new type system PEPs before they are accepted and\nadded to the typing module.\n\ntyping_extensions is treated specially by static type checkers such as\nmypy and pyright. Objects defined in typing_extensions are treated the same\nway as equivalent forms in typing.\ntyping_extensions uses\nSemantic Versioning. The\nmajor version will be incremented only for backwards-incompatible changes.\nTherefore, it's safe to depend\non typing_extensions like this: typing_extensions >=x.y, <(x+1),\nwhere x.y is the first version that includes all features you need.\ntyping_extensions supports Python versions 3.7 and higher.\nIncluded items\nSee the documentation for a\ncomplete listing of module contents.\nContributing\nSee CONTRIBUTING.md\nfor how to contribute to typing_extensions.\n"}, {"name": "typer", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFastAPI of CLIs\nRequirements\nInstallation\nExample\nThe absolute minimum\nRun it\nExample upgrade\nAn example with two subcommands\nRun the upgraded example\nRecap\nOptional Dependencies\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\nTyper, build great CLIs. Easy to code. Based on Python type hints.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nDocumentation: https://typer.tiangolo.com\nSource Code: https://github.com/tiangolo/typer\n\nTyper is a library for building CLI applications that users will love using and developers will love creating. Based on Python 3.6+ type hints.\nThe key features are:\n\nIntuitive to write: Great editor support. Completion everywhere. Less time debugging. Designed to be easy to use and learn. Less time reading docs.\nEasy to use: It's easy to use for the final users. Automatic help, and automatic completion for all shells.\nShort: Minimize code duplication. Multiple features from each parameter declaration. 
Fewer bugs.\nStart simple: The simplest example adds only 2 lines of code to your app: 1 import, 1 function call.\nGrow large: Grow in complexity as much as you want, create arbitrarily complex trees of commands and groups of subcommands, with options and arguments.\n\nFastAPI of CLIs\n\nTyper is FastAPI's little sibling.\nAnd it's intended to be the FastAPI of CLIs.\nRequirements\nPython 3.6+\nTyper stands on the shoulders of a giant. Its only internal dependency is Click.\nInstallation\n\n$ pip install \"typer[all]\"\n---> 100%\nSuccessfully installed typer\n\nNote: that will include Rich. Rich is the recommended library to display information on the terminal, it is optional, but when installed, it's deeply integrated into Typer to display beautiful output.\nExample\nThe absolute minimum\n\nCreate a file main.py with:\n\nimport typer\n\n\ndef main(name: str):\n    print(f\"Hello {name}\")\n\n\nif __name__ == \"__main__\":\n    typer.run(main)\nRun it\nRun your application:\n\n// Run your application\n$ python main.py\n\n// You get a nice error, you are missing NAME\nUsage: main.py [OPTIONS] NAME\nTry 'main.py --help' for help.\n\u256d\u2500 Error \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Missing argument 'NAME'.                          \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n\n// You get a --help for free\n$ python main.py --help\n\nUsage: main.py [OPTIONS] NAME\n\n\u256d\u2500 Arguments \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 *    name      TEXT  [default: None] [required]   |\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\u256d\u2500 Options \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 --help          Show this message and exit.       \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n// Now pass the NAME argument\n$ python main.py Camila\n\nHello Camila\n\n// It works! 
\ud83c\udf89\n\nNote: auto-completion works when you create a Python package and run it with --install-completion or when you use Typer CLI.\nExample upgrade\nThis was the simplest example possible.\nNow let's see one a bit more complex.\nAn example with two subcommands\nModify the file main.py.\nCreate a typer.Typer() app, and create two subcommands with their parameters.\nimport typer\n\napp = typer.Typer()\n\n\n@app.command()\ndef hello(name: str):\n    print(f\"Hello {name}\")\n\n\n@app.command()\ndef goodbye(name: str, formal: bool = False):\n    if formal:\n        print(f\"Goodbye Ms. {name}. Have a good day.\")\n    else:\n        print(f\"Bye {name}!\")\n\n\nif __name__ == \"__main__\":\n    app()\nAnd that will:\n\nExplicitly create a typer.Typer app.\n\nThe previous typer.run actually creates one implicitly for you.\n\n\nAdd two subcommands with @app.command().\nExecute the app() itself, as if it was a function (instead of typer.run).\n\nRun the upgraded example\nCheck the new help:\n\n$ python main.py --help\n\n Usage: main.py [OPTIONS] COMMAND [ARGS]...\n\n\u256d\u2500 Options \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 --install-completion          Install completion  \u2502\n\u2502                               for the current     \u2502\n\u2502                               shell.              \u2502\n\u2502 --show-completion             Show completion for \u2502\n\u2502                               the current shell,  \u2502\n\u2502                               to copy it or       \u2502\n\u2502                               customize the       \u2502\n\u2502                               installation.       \u2502\n\u2502 --help                        Show this message   \u2502\n\u2502                               and exit.           
\u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\u256d\u2500 Commands \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 goodbye                                           \u2502\n\u2502 hello                                             \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n// When you create a package you get \u2728 auto-completion \u2728 for free, installed with --install-completion\n\n// You have 2 subcommands (the 2 functions): goodbye and hello\n\nNow check the help for the hello command:\n\n$ python main.py hello --help\n\n Usage: main.py hello [OPTIONS] NAME\n\n\u256d\u2500 Arguments \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 *    name      TEXT  [default: None] [required]   \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\u256d\u2500 Options \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 --help          Show this message and exit.       
\u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\nAnd now check the help for the goodbye command:\n\n$ python main.py goodbye --help\n\n Usage: main.py goodbye [OPTIONS] NAME\n\n\u256d\u2500 Arguments \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 *    name      TEXT  [default: None] [required]   \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\u256d\u2500 Options \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 --formal    --no-formal      [default: no-formal] \u2502\n\u2502 --help                       Show this message    \u2502\n\u2502                              and exit.            \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n// Automatic --formal and --no-formal for the bool option \ud83c\udf89\n\nNow you can try out the new command line application:\n\n// Use it with the hello command\n\n$ python main.py hello Camila\n\nHello Camila\n\n// And with the goodbye command\n\n$ python main.py goodbye Camila\n\nBye Camila!\n\n// And with --formal\n\n$ python main.py goodbye --formal Camila\n\nGoodbye Ms. Camila. Have a good day.\n\nRecap\nIn summary, you declare once the types of parameters (CLI arguments and CLI options) as function parameters.\nYou do that with standard modern Python types.\nYou don't have to learn a new syntax, the methods or classes of a specific library, etc.\nJust standard Python 3.6+.\nFor example, for an int:\ntotal: int\nor for a bool flag:\nforce: bool\nAnd similarly for files, paths, enums (choices), etc. And there are tools to create groups of subcommands, add metadata, extra validation, etc.\nYou get: great editor support, including completion and type checks everywhere.\nYour users get: automatic --help, auto-completion in their terminal (Bash, Zsh, Fish, PowerShell) when they install your package or when using Typer CLI.\nFor a more complete example including more features, see the Tutorial - User Guide.\nOptional Dependencies\nTyper uses Click internally. 
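As a tiny illustration of the recap above, a sketch using only standard type hints; the Color enum and the parameter names are made up for this example and are not part of the official tutorial:

from enum import Enum

import typer

class Color(str, Enum):
    red = "red"
    green = "green"

def main(total: int, force: bool = False, color: Color = Color.red):
    # Typer turns the annotations into a CLI: a required TOTAL argument,
    # a --force/--no-force flag and a --color choice option.
    print(total, force, color.value)

if __name__ == "__main__":
    typer.run(main)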
That's the only dependency.\nBut you can also install extras:\n\nrich: and Typer will show nicely formatted errors automatically.\nshellingham: and Typer will automatically detect the current shell when installing completion.\n\nWith shellingham you can just use --install-completion.\nWithout shellingham, you have to pass the name of the shell to install completion for, e.g. --install-completion bash.\n\n\n\nYou can install typer with rich and shellingham with pip install typer[all].\nLicense\nThis project is licensed under the terms of the MIT license.\n\n\n", "description": "Build command-line interfaces using Python type hints.", "category": "CLI"}, {"name": "trimesh", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBasic Installation\nQuick Start\nFeatures\nViewer\nProjects Using Trimesh\nWhich Mesh Format Should I Use?\nHow can I cite this library?\nContainers\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n   \nTrimesh is a pure Python (2.7-3.5+) library for loading and using triangular meshes with an emphasis on watertight surfaces. The goal of the library is to provide a full featured and well tested Trimesh object which allows for easy manipulation and analysis, in the style of the Polygon object in the Shapely library.\nThe API is mostly stable, but this should not be relied on and is not guaranteed: install a specific version if you plan on deploying something using trimesh.\nPull requests are appreciated and responded to promptly! If you'd like to contribute, here is an up to date list of potential enhancements although things not on that list are also welcome. Here's a quick development and contributing guide.\nBasic Installation\nKeeping trimesh easy to install is a core goal, thus the only hard dependency is numpy. Installing other packages adds functionality but is not required. For the easiest install with just numpy, pip can generally install trimesh cleanly on Windows, Linux, and OSX:\npip install trimesh\nThe minimal install can load many supported formats (STL, PLY, GLTF/GLB) into numpy arrays. More functionality is available when soft dependencies are installed. This includes things like convex hulls (scipy), graph operations (networkx), faster ray queries (pyembree), vector path handling (shapely and rtree), XML formats like 3DXML/XAML/3MF (lxml), preview windows (pyglet), faster cache checks (xxhash), etc. To install trimesh with the soft dependencies that generally install cleanly on Linux, OSX, and Windows using pip:\npip install trimesh[easy]\nFurther information is available in the advanced installation documentation.\nQuick Start\nHere is an example of loading a mesh from file and colorizing its faces. Here is a nicely formatted\nipython notebook version of this example. 
Also check out the cross section example.\nimport numpy as np\nimport trimesh\n\n# attach to logger so trimesh messages will be printed to console\ntrimesh.util.attach_to_log()\n\n# mesh objects can be created from existing faces and vertex data\nmesh = trimesh.Trimesh(vertices=[[0, 0, 0], [0, 0, 1], [0, 1, 0]],\n                       faces=[[0, 1, 2]])\n\n# by default, Trimesh will do a light processing, which will\n# remove any NaN values and merge vertices that share position\n# if you want to not do this on load, you can pass `process=False`\nmesh = trimesh.Trimesh(vertices=[[0, 0, 0], [0, 0, 1], [0, 1, 0]],\n                       faces=[[0, 1, 2]],\n                       process=False)\n\n# some formats represent multiple meshes with multiple instances\n# the loader tries to return the datatype which makes the most sense\n# which will for scene-like files will return a `trimesh.Scene` object.\n# if you *always* want a straight `trimesh.Trimesh` you can ask the\n# loader to \"force\" the result into a mesh through concatenation\nmesh = trimesh.load('models/CesiumMilkTruck.glb', force='mesh')\n\n# mesh objects can be loaded from a file name or from a buffer\n# you can pass any of the kwargs for the `Trimesh` constructor\n# to `trimesh.load`, including `process=False` if you would like\n# to preserve the original loaded data without merging vertices\n# STL files will be a soup of disconnected triangles without\n# merging vertices however and will not register as watertight\nmesh = trimesh.load('../models/featuretype.STL')\n\n# is the current mesh watertight?\nmesh.is_watertight\n\n# what's the euler number for the mesh?\nmesh.euler_number\n\n# the convex hull is another Trimesh object that is available as a property\n# lets compare the volume of our mesh with the volume of its convex hull\nprint(mesh.volume / mesh.convex_hull.volume)\n\n# since the mesh is watertight, it means there is a\n# volumetric center of mass which we can set as the origin for our mesh\nmesh.vertices -= mesh.center_mass\n\n# what's the moment of inertia for the mesh?\nmesh.moment_inertia\n\n# if there are multiple bodies in the mesh we can split the mesh by\n# connected components of face adjacency\n# since this example mesh is a single watertight body we get a list of one mesh\nmesh.split()\n\n# facets are groups of coplanar adjacent faces\n# set each facet to a random color\n# colors are 8 bit RGBA by default (n, 4) np.uint8\nfor facet in mesh.facets:\n    mesh.visual.face_colors[facet] = trimesh.visual.random_color()\n\n# preview mesh in an opengl window if you installed pyglet and scipy with pip\nmesh.show()\n\n# transform method can be passed a (4, 4) matrix and will cleanly apply the transform\nmesh.apply_transform(trimesh.transformations.random_rotation_matrix())\n\n# axis aligned bounding box is available\nmesh.bounding_box.extents\n\n# a minimum volume oriented bounding box also available\n# primitives are subclasses of Trimesh objects which automatically generate\n# faces and vertices from data stored in the 'primitive' attribute\nmesh.bounding_box_oriented.primitive.extents\nmesh.bounding_box_oriented.primitive.transform\n\n# show the mesh appended with its oriented bounding box\n# the bounding box is a trimesh.primitives.Box object, which subclasses\n# Trimesh and lazily evaluates to fill in vertices and faces when requested\n# (press w in viewer to see triangles)\n(mesh + mesh.bounding_box_oriented).show()\n\n# bounding spheres and bounding cylinders of meshes are also\n# available, and will be the 
minimum volume version of each\n# except in certain degenerate cases, where they will be no worse\n# than a least squares fit version of the primitive.\nprint(mesh.bounding_box_oriented.volume,\n      mesh.bounding_cylinder.volume,\n      mesh.bounding_sphere.volume)\nFeatures\n\nImport meshes from binary/ASCII STL, Wavefront OBJ, ASCII OFF, binary/ASCII PLY, GLTF/GLB 2.0, 3MF, XAML, 3DXML, etc.\nImport and export 2D or 3D vector paths from/to DXF or SVG files\nImport geometry files using the GMSH SDK if installed (BREP, STEP, IGES, INP, BDF, etc)\nExport meshes as binary STL, binary PLY, ASCII OFF, OBJ, GLTF/GLB 2.0, COLLADA, etc.\nExport meshes using the GMSH SDK if installed (Abaqus INP, Nastran BDF, etc)\nPreview meshes using pyglet or in-line in jupyter notebooks using three.js\nAutomatic hashing of numpy arrays for change tracking using MD5, zlib CRC, or xxhash\nInternal caching of computed values validated from hashes\nCalculate face adjacencies, face angles, vertex defects, etc.\nCalculate cross sections, i.e. the slicing operation used in 3D printing (see the sketch below)\nSlice meshes with one or multiple arbitrary planes and return the resulting surface\nSplit mesh based on face connectivity using networkx, graph-tool, or scipy.sparse\nCalculate mass properties, including volume, center of mass, moment of inertia, principal components of inertia vectors and components\nRepair simple problems with triangle winding, normals, and quad/tri holes\nConvex hulls of meshes\nCompute rotation/translation/tessellation invariant identifier and find duplicate meshes\nDetermine if a mesh is watertight, convex, etc.\nUniformly sample the surface of a mesh\nRay-mesh queries including location, triangle index, etc.\nBoolean operations on meshes (intersection, union, difference) using OpenSCAD or Blender as a back end. Note that mesh booleans in general are usually slow and unreliable\nVoxelize watertight meshes\nVolume mesh generation (TetGen) using Gmsh SDK\nSmooth watertight meshes using laplacian smoothing algorithms (Classic, Taubin, Humphrey)\nSubdivide faces of a mesh\nApproximate minimum volume oriented bounding boxes for meshes\nApproximate minimum volume bounding spheres\nCalculate nearest point on mesh surface and signed distance\nDetermine if a point lies inside or outside of a well constructed mesh using signed distance\nPrimitive objects (Box, Cylinder, Sphere, Extrusion) which are subclassed Trimesh objects and have all the same features (inertia, viewers, etc)\nSimple scene graph and transform tree which can be rendered (pyglet window, three.js in a jupyter notebook, pyrender) or exported.\nMany utility functions, like transforming points, unitizing vectors, aligning vectors, tracking numpy arrays for changes, grouping rows, etc.\n\nViewer\nTrimesh includes an optional pyglet based viewer for debugging and inspecting. In the mesh view window, opened with mesh.show(), the following commands can be used:\n\nmouse click + drag rotates the view\nctrl + mouse click + drag pans the view\nmouse wheel zooms\nz returns to the base view\nw toggles wireframe mode\nc toggles backface culling\ng toggles an XY grid with Z set to lowest point\na toggles an XYZ-RGB axis marker between: off, at world frame, at every frame and world, or at every frame\nf toggles between fullscreen and windowed mode\nm maximizes the window\nq closes the window\n\nIf called from inside a jupyter notebook, mesh.show() displays an in-line preview using three.js to display the mesh or scene. 
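\nThe cross-section and plane-slicing features listed above deserve a tiny illustration. The following is only a hedged sketch, reusing the featuretype.STL model from the quickstart and assuming the optional scipy/shapely dependencies are installed:\nimport trimesh\n\nmesh = trimesh.load('../models/featuretype.STL')\n\n# intersect the mesh with a plane through its centroid;\n# the result is a Path3D outline of the cross section\nsection = mesh.section(plane_origin=mesh.centroid,\n                       plane_normal=[0, 0, 1])\n\n# flatten the planar 3D path into 2D and inspect the enclosed area\nsection_2D, to_3D = section.to_planar()\nprint(section_2D.area)\n\n# keep only the part of the mesh on the positive side of the plane\nsliced = mesh.slice_plane(plane_origin=mesh.centroid,\n                          plane_normal=[0, 0, 1])\n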
For more complete rendering than the built-in viewers (PBR, better lighting, shaders, better off-screen support, etc), pyrender is designed to interoperate with trimesh objects.\nProjects Using Trimesh\nYou can check out the GitHub network for things using trimesh. A select few:\n\nNvidia's kaolin for deep learning on 3D geometry.\nCura, a popular slicer for 3D printing.\nBerkeley's DexNet4 and related ambidextrous.ai work with robotic grasp planning and manipulation.\nKerfed's Engine for analyzing assembly geometry for manufacturing.\nMyMiniFactory's P2Slice for preparing models for 3D printing.\npyrender A library to render scenes from Python using nice looking PBR materials.\nurdfpy Load URDF robot descriptions in Python.\nmoderngl-window A helper to create GL contexts and load meshes.\nvedo Visualize meshes interactively (see example gallery).\nFSLeyes View MRI images and brain data.\n\nWhich Mesh Format Should I Use?\nQuick recommendation: GLB or PLY. Every time you replace OBJ with GLB an angel gets its wings.\nIf you want things like by-index faces, instancing, colors, textures, etc, GLB is a terrific choice. GLTF/GLB is an extremely well specified modern format that is easy and fast to parse: it has a JSON header describing data in a binary blob. It has a simple hierarchical scene graph, a great looking modern physically based material system, support in dozens-to-hundreds of libraries, and a John Carmack endorsement. Note that GLTF is a large specification, and trimesh only supports a subset of features: loading basic geometry is supported, NOT supported are fancier things like animations, skeletons, etc.\nIn the wild, STL is perhaps the most common format. STL files are extremely simple: they are basically just a list of triangles. They are robust and are a good choice for basic geometry. Binary PLY files are a good step up, as they support indexed faces and colors.\nWavefront OBJ is also pretty common: unfortunately OBJ doesn't have a widely accepted specification, so every importer and exporter implements things slightly differently, making it tough to support. It also allows unfortunate things like arbitrary sized polygons, has a face representation which is easy to mess up, references other files for materials and textures, arbitrarily interleaves data, and is slow to parse. Give GLB or PLY a try as an alternative!\nHow can I cite this library?\nA question that comes up pretty frequently is how to cite the library. A quick BibTeX recommendation:\n@software{trimesh,\n\tauthor = {{Dawson-Haggerty et al.}},\n\ttitle = {trimesh},\n\turl = {https://trimsh.org/},\n\tversion = {3.2.0},\n\tdate = {2019-12-8},\n}\n\nContainers\nIf you want to deploy something in a container that uses trimesh, automated debian:slim-bullseye based builds with trimesh and most dependencies are available on Docker Hub, with image tags for latest, git short hash for the commit in main (i.e. 
trimesh/trimesh:3.5.27):\ndocker pull trimesh/trimesh\nHere's an example of how to render meshes using LLVMpipe and XVFB inside a container.\n\n\n", "description": "Load, process, and render triangular meshes"}, {"name": "traitlets", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTraitlets\nInstallation\nRunning the tests\nCode Styling\nUsage\nDynamic default values\nCallbacks when a trait attribute changes\nValidation and coercion\nAbout the IPython Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nTraitlets\n\n\n\n\n\n\n\n\n\n\n\n\nhome\nhttps://github.com/ipython/traitlets\n\n\npypi-repo\nhttps://pypi.org/project/traitlets/\n\n\ndocs\nhttps://traitlets.readthedocs.io/\n\n\nlicense\nModified BSD License\n\n\n\nTraitlets is a pure Python library enabling:\n\nthe enforcement of strong typing for attributes of Python objects\n(typed attributes are called \"traits\");\ndynamically calculated default values;\nautomatic validation and coercion of trait attributes when attempting a\nchange;\nregistering for receiving notifications when trait values change;\nreading configuring values from files or from command line\narguments - a distinct layer on top of traitlets, so you may use\ntraitlets without the configuration machinery.\n\nIts implementation relies on the descriptor\npattern, and it is a lightweight pure-python alternative of the\ntraits library.\nTraitlets powers the configuration system of IPython and Jupyter\nand the declarative API of IPython interactive widgets.\nInstallation\nFor a local installation, make sure you have\npip installed and run:\npip install traitlets\nFor a development installation, clone this repository, change into the\ntraitlets root directory, and run pip:\ngit clone https://github.com/ipython/traitlets.git\ncd traitlets\npip install -e .\nRunning the tests\npip install \"traitlets[test]\"\npy.test traitlets\nCode Styling\ntraitlets has adopted automatic code formatting so you shouldn't\nneed to worry too much about your code style.\nAs long as your code is valid,\nthe pre-commit hook should take care of how it should look.\nTo install pre-commit locally, run the following::\npip install pre-commit\npre-commit install\n\nYou can invoke the pre-commit hook by hand at any time with::\npre-commit run\n\nwhich should run any autoformatting on your code\nand tell you about any errors it couldn't fix automatically.\nYou may also install black integration\ninto your text editor to format code automatically.\nIf you have already committed files before setting up the pre-commit\nhook with pre-commit install, you can fix everything up using\npre-commit run --all-files. You need to make the fixing commit\nyourself after that.\nSome of the hooks only run on CI by default, but you can invoke them by\nrunning with the --hook-stage manual argument.\nUsage\nAny class with trait attributes must inherit from HasTraits.\nFor the list of available trait types and their properties, see the\nTrait Types\nsection of the documentation.\nDynamic default values\nTo calculate a default value dynamically, decorate a method of your class with\n@default({traitname}). This method will be called on the instance, and\nshould return the default value. 
In this example, the _username_default\nmethod is decorated with @default('username'):\nimport getpass\nfrom traitlets import HasTraits, Unicode, default\n\nclass Identity(HasTraits):\n    username = Unicode()\n\n    @default('username')\n    def _username_default(self):\n        return getpass.getuser()\nCallbacks when a trait attribute changes\nWhen a trait changes, an application can follow this trait change with\nadditional actions.\nTo do something when a trait attribute is changed, decorate a method with\ntraitlets.observe().\nThe method will be called with a single argument, a dictionary which contains\nan owner, new value, old value, name of the changed trait, and the event type.\nIn this example, the _num_changed method is decorated with @observe(`num`):\nfrom traitlets import HasTraits, Integer, observe\n\nclass TraitletsExample(HasTraits):\n    num = Integer(5, help=\"a number\").tag(config=True)\n\n    @observe('num')\n    def _num_changed(self, change):\n        print(\"{name} changed from {old} to {new}\".format(**change))\nand is passed the following dictionary when called:\n{\n  'owner': object,  # The HasTraits instance\n  'new': 6,         # The new value\n  'old': 5,         # The old value\n  'name': \"foo\",    # The name of the changed trait\n  'type': 'change', # The event type of the notification, usually 'change'\n}\nValidation and coercion\nEach trait type (Int, Unicode, Dict etc.) may have its own validation or\ncoercion logic. In addition, we can register custom cross-validators\nthat may depend on the state of other attributes. For example:\nfrom traitlets import HasTraits, TraitError, Int, Bool, validate\n\nclass Parity(HasTraits):\n    value = Int()\n    parity = Int()\n\n    @validate('value')\n    def _valid_value(self, proposal):\n        if proposal['value'] % 2 != self.parity:\n            raise TraitError('value and parity should be consistent')\n        return proposal['value']\n\n    @validate('parity')\n    def _valid_parity(self, proposal):\n        parity = proposal['value']\n        if parity not in [0, 1]:\n            raise TraitError('parity should be 0 or 1')\n        if self.value % 2 != parity:\n            raise TraitError('value and parity should be consistent')\n        return proposal['value']\n\nparity_check = Parity(value=2)\n\n# Changing required parity and value together while holding cross validation\nwith parity_check.hold_trait_notifications():\n    parity_check.value = 1\n    parity_check.parity = 1\nHowever, we recommend that custom cross-validators don't modify the state\nof the HasTraits instance.\nAbout the IPython Development Team\nThe IPython Development Team is the set of all contributors to the IPython project.\nThis includes all of the IPython subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nIPython uses a shared copyright model. Each contributor maintains copyright\nover their contributions to IPython. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the IPython\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire IPython\nDevelopment Team. 
If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the IPython repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n", "description": "Configuration system for Python applications."}, {"name": "tqdm", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ntqdm\nInstallation\nLatest PyPI stable release\nLatest development release on GitHub\nLatest Conda release\nLatest Snapcraft release\nLatest Docker release\nOther\nChangelog\nUsage\nIterable-based\nManual\nModule\nFAQ and Known Issues\nDocumentation\nParameters\nExtra CLI Options\nReturns\nConvenience Functions\nSubmodules\ncontrib\nExamples and Advanced Usage\nDescription and additional stats\nNested progress bars\nHooks and callbacks\nasyncio\nPandas Integration\nKeras Integration\nDask Integration\nIPython/Jupyter Integration\nCustom Integration\nDynamic Monitor/Meter\nWriting messages\nRedirecting writing\nRedirecting logging\nMonitoring thread, intervals and miniters\nMerch\nContributions\nPorts to Other Languages\nLICENCE\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\ntqdm\n \n   \n  \n   \n\n  \n \n\ntqdm derives from the Arabic word taqaddum (\u062a\u0642\u062f\u0651\u0645) which can mean \"progress,\"\nand is an abbreviation for \"I love you so much\" in Spanish (te quiero demasiado).\nInstantly make your loops show a smart progress meter - just wrap any\niterable with tqdm(iterable), and you're done!\nfrom tqdm import tqdm\nfor i in tqdm(range(10000)):\n    ...\n76%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 | 7568/10000 [00:33<00:10, 229.00it/s]\ntrange(N) can be also used as a convenient shortcut for\ntqdm(range(N)).\n\n\n  \n\nIt can also be executed as a module with pipes:\n$ seq 9999999 | tqdm --bytes | wc -l\n75.2MB [00:00, 217MB/s]\n9999999\n\n$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \\\n    > backup.tgz\n 32%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258d                      | 8.89G/27.9G [00:42<01:31, 223MB/s]\nOverhead is low -- about 60ns per iteration (80ns with tqdm.gui), and is\nunit tested against performance regression.\nBy comparison, the well-established\nProgressBar has\nan 800ns/iter overhead.\nIn addition to its low overhead, tqdm uses smart algorithms to predict\nthe remaining time and to skip unnecessary iteration displays, which allows\nfor a negligible overhead in most cases.\ntqdm works on any platform\n(Linux, Windows, Mac, FreeBSD, NetBSD, Solaris/SunOS),\nin any console or in a GUI, and is also friendly with IPython/Jupyter notebooks.\ntqdm does not require any dependencies (not even curses!), just\nPython and an environment supporting carriage return \\r and\nline feed \\n control characters.\n\n\nTable of contents\n\nInstallation\nLatest PyPI stable release\nLatest development release on GitHub\nLatest Conda release\nLatest Snapcraft release\nLatest Docker release\nOther\n\n\nChangelog\nUsage\nIterable-based\nManual\nModule\n\n\nFAQ and Known Issues\nDocumentation\nParameters\nExtra CLI Options\nReturns\nConvenience Functions\nSubmodules\ncontrib\n\n\n\n\nExamples and 
Advanced Usage\nDescription and additional stats\nNested progress bars\nHooks and callbacks\nasyncio\nPandas Integration\nKeras Integration\nDask Integration\nIPython/Jupyter Integration\nCustom Integration\nDynamic Monitor/Meter\nWriting messages\nRedirecting writing\nRedirecting logging\nMonitoring thread, intervals and miniters\n\n\nMerch\nContributions\nPorts to Other Languages\n\n\nLICENCE\n\n\n\nInstallation\n\nLatest PyPI stable release\n\n  \npip install tqdm\n\nLatest development release on GitHub\n    \nPull and install pre-release devel branch:\npip install \"git+https://github.com/tqdm/tqdm.git@devel#egg=tqdm\"\n\nLatest Conda release\n\nconda install -c conda-forge tqdm\n\nLatest Snapcraft release\n\nThere are 3 channels to choose from:\nsnap install tqdm  # implies --stable, i.e. latest tagged release\nsnap install tqdm  --candidate  # master branch\nsnap install tqdm  --edge  # devel branch\nNote that snap binaries are purely for CLI use (not import-able), and\nautomatically set up bash tab-completion.\n\nLatest Docker release\n\ndocker pull tqdm/tqdm\ndocker run -i --rm tqdm/tqdm --help\n\nOther\nThere are other (unofficial) places where tqdm may be downloaded, particularly for CLI use:\n\n\n\nChangelog\nThe list of all changes is available either on GitHub's Releases, on the\nwiki, or on the\nwebsite.\n\nUsage\ntqdm is very versatile and can be used in a number of ways.\nThe three main ones are given below.\n\nIterable-based\nWrap tqdm() around any iterable:\nfrom tqdm import tqdm\nfrom time import sleep\n\ntext = \"\"\nfor char in tqdm([\"a\", \"b\", \"c\", \"d\"]):\n    sleep(0.25)\n    text = text + char\ntrange(i) is a special optimised instance of tqdm(range(i)):\nfrom tqdm import trange\n\nfor i in trange(100):\n    sleep(0.01)\nInstantiation outside of the loop allows for manual control over tqdm():\npbar = tqdm([\"a\", \"b\", \"c\", \"d\"])\nfor char in pbar:\n    sleep(0.25)\n    pbar.set_description(\"Processing %s\" % char)\n\nManual\nManual control of tqdm() updates using a with statement:\nwith tqdm(total=100) as pbar:\n    for i in range(10):\n        sleep(0.1)\n        pbar.update(10)\nIf the optional variable total (or an iterable with len()) is\nprovided, predictive stats are displayed.\nwith is also optional (you can just assign tqdm() to a variable,\nbut in this case don't forget to del or close() at the end):\npbar = tqdm(total=100)\nfor i in range(10):\n    sleep(0.1)\n    pbar.update(10)\npbar.close()\n\nModule\nPerhaps the most wonderful use of tqdm is in a script or on the command\nline. Simply inserting tqdm (or python -m tqdm) between pipes will pass\nthrough all stdin to stdout while printing progress to stderr.\nThe example below demonstrates counting the number of lines in all Python files\nin the current directory, with timing information included.\n$ time find . -name '*.py' -type f -exec cat \\{} \\; | wc -l\n857365\n\nreal    0m3.458s\nuser    0m0.274s\nsys     0m3.325s\n\n$ time find . -name '*.py' -type f -exec cat \\{} \\; | tqdm | wc -l\n857366it [00:03, 246471.31it/s]\n857365\n\nreal    0m3.585s\nuser    0m0.862s\nsys     0m3.358s\nNote that the usual arguments for tqdm can also be specified.\n$ find . 
-name '*.py' -type f -exec cat \\{} \\; |\n    tqdm --unit loc --unit_scale --total 857366 >> /dev/null\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 857K/857K [00:04<00:00, 246Kloc/s]\nBacking up a large directory?\n$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \\\n  > backup.tgz\n 44%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a                   | 153M/352M [00:14<00:18, 11.0MB/s]\nThis can be beautified further:\n$ BYTES=$(du -sb docs/ | cut -f1)\n$ tar -cf - docs/ \\\n  | tqdm --bytes --total \"$BYTES\" --desc Processing | gzip \\\n  | tqdm --bytes --total \"$BYTES\" --desc Compressed --position 1 \\\n  > ~/backup.tgz\nProcessing: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 352M/352M [00:14<00:00, 30.2MB/s]\nCompressed:  42%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e            | 148M/352M [00:14<00:19, 10.9MB/s]\nOr done on a file level using 7-zip:\n$ 7z a -bd -r backup.7z docs/ | grep Compressing \\\n  | tqdm --total $(find docs/ -type f | wc -l) --unit files \\\n  | grep -v Compressing\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 15327/15327 [01:00<00:00, 712.96files/s]\nPre-existing CLI programs already outputting basic progress information will\nbenefit from tqdm's --update and --update_to flags:\n$ seq 3 0.1 5 | tqdm --total 5 --update_to --null\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.0/5 [00:00<00:00, 9673.21it/s]\n$ seq 10 | tqdm --update --null  # 1 + 2 + ... + 10 = 55 iterations\n55it [00:00, 90006.52it/s]\n\nFAQ and Known Issues\n\nThe most common issues relate to excessive output on multiple lines, instead\nof a neat one-line progress bar.\n\nConsoles in general: require support for carriage return (CR, \\r).\nSome cloud logging consoles which don't support \\r properly\n(cloudwatch,\nK8s) may benefit from\nexport TQDM_POSITION=-1.\n\n\nNested progress bars:\nConsoles in general: require support for moving cursors up to the\nprevious line. For example,\nIDLE,\nConEmu and\nPyCharm (also\nhere,\nhere, and\nhere)\nlack full support.\nWindows: additionally may require the Python module colorama\nto ensure nested bars stay within their respective lines.\n\n\nUnicode:\nEnvironments which report that they support unicode will have solid smooth\nprogressbars. The fallback is an ascii-only bar.\nWindows consoles often only partially support unicode and thus\noften require explicit ascii=True\n(also here). 
This is due to\neither normal-width unicode characters being incorrectly displayed as\n\"wide\", or some unicode characters not rendering.\n\n\nWrapping generators:\nGenerator wrapper functions tend to hide the length of iterables.\ntqdm does not.\nReplace tqdm(enumerate(...)) with enumerate(tqdm(...)) or\ntqdm(enumerate(x), total=len(x), ...).\nThe same applies to numpy.ndenumerate.\nReplace tqdm(zip(a, b)) with zip(tqdm(a), b) or even\nzip(tqdm(a), tqdm(b)).\nThe same applies to itertools.\nSome useful convenience functions can be found under tqdm.contrib.\n\n\nNo intermediate output in docker-compose:\nuse docker-compose run instead of docker-compose up and tty: true.\nOverriding defaults via environment variables:\ne.g. in CI/cloud jobs, export TQDM_MININTERVAL=5 to avoid log spam.\nThis override logic is handled by the tqdm.utils.envwrap decorator\n(useful independent of tqdm).\n\nIf you come across any other difficulties, browse and file .\n\nDocumentation\n  (Since 19 May 2016)\nclass tqdm():\n  \"\"\"\n  Decorate an iterable object, returning an iterator which acts exactly\n  like the original iterable, but prints a dynamically updating\n  progressbar every time a value is requested.\n  \"\"\"\n\n  @envwrap(\"TQDM_\")  # override defaults via env vars\n  def __init__(self, iterable=None, desc=None, total=None, leave=True,\n               file=None, ncols=None, mininterval=0.1,\n               maxinterval=10.0, miniters=None, ascii=None, disable=False,\n               unit='it', unit_scale=False, dynamic_ncols=False,\n               smoothing=0.3, bar_format=None, initial=0, position=None,\n               postfix=None, unit_divisor=1000, write_bytes=False,\n               lock_args=None, nrows=None, colour=None, delay=0):\n\nParameters\n\n\niterable : iterable, optional\nIterable to decorate with a progressbar.\nLeave blank to manually manage the updates.\n\n\n\n\ndesc : str, optional\nPrefix for the progressbar.\n\n\n\n\ntotal : int or float, optional\nThe number of expected iterations. If unspecified,\nlen(iterable) is used if possible. If float(\"inf\") or as a last\nresort, only basic progress statistics are displayed\n(no ETA, no progressbar).\nIf gui is True and this parameter needs subsequent updating,\nspecify an initial arbitrary large positive number,\ne.g. 9e9.\n\n\n\n\nleave : bool, optional\nIf [default: True], keeps all traces of the progressbar\nupon termination of iteration.\nIf None, will leave only if position is 0.\n\n\n\n\nfile : io.TextIOWrapper or io.StringIO, optional\nSpecifies where to output the progress messages\n(default: sys.stderr). Uses file.write(str) and file.flush()\nmethods.  For encoding, see write_bytes.\n\n\n\n\nncols : int, optional\nThe width of the entire output message. If specified,\ndynamically resizes the progressbar to stay within this bound.\nIf unspecified, attempts to use environment width. The\nfallback is a meter width of 10 and no limit for the counter and\nstatistics. If 0, will not print any meter (only stats).\n\n\n\n\nmininterval : float, optional\nMinimum progress display update interval [default: 0.1] seconds.\n\n\n\n\nmaxinterval : float, optional\nMaximum progress display update interval [default: 10] seconds.\nAutomatically adjusts miniters to correspond to mininterval\nafter long display update lag. 
Only works if dynamic_miniters\nor monitor thread is enabled.\n\n\n\nminiters : int or float, optional\nMinimum progress display update interval, in iterations.\nIf 0 and dynamic_miniters, will automatically adjust to equal\nmininterval (more CPU efficient, good for tight loops).\nIf > 0, will skip display of specified number of iterations.\nTweak this and mininterval to get very efficient loops.\nIf your progress is erratic with both fast and slow iterations\n(network, skipping items, etc) you should set miniters=1.\n\n\n\nascii : bool or str, optional\nIf unspecified or False, use unicode (smooth blocks) to fill\nthe meter. The fallback is to use ASCII characters \" 123456789#\".\n\n\n\ndisable : bool, optional\nWhether to disable the entire progressbar wrapper\n[default: False]. If set to None, disable on non-TTY.\n\n\n\nunit : str, optional\nString that will be used to define the unit of each iteration\n[default: it].\n\n\n\nunit_scale : bool or int or float, optional\nIf 1 or True, the number of iterations will be reduced/scaled\nautomatically and a metric prefix following the\nInternational System of Units standard will be added\n(kilo, mega, etc.) [default: False]. If any other non-zero\nnumber, will scale total and n.\n\n\n\ndynamic_ncols : bool, optional\nIf set, constantly alters ncols and nrows to the\nenvironment (allowing for window resizes) [default: False].\n\n\n\nsmoothing : float, optional\nExponential moving average smoothing factor for speed estimates\n(ignored in GUI mode). Ranges from 0 (average speed) to 1\n(current/instantaneous speed) [default: 0.3].\n\n\n\nbar_format : str, optional\nSpecify a custom bar string formatting. May impact performance.\n[default: '{l_bar}{bar}{r_bar}'], where\nl_bar='{desc}: {percentage:3.0f}%|' and\nr_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, '\n'{rate_fmt}{postfix}]'\nPossible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt,\npercentage, elapsed, elapsed_s, ncols, nrows, desc, unit,\nrate, rate_fmt, rate_noinv, rate_noinv_fmt,\nrate_inv, rate_inv_fmt, postfix, unit_divisor,\nremaining, remaining_s, eta.\nNote that a trailing \": \" is automatically removed after {desc}\nif the latter is empty.\n\n\n\ninitial : int or float, optional\nThe initial counter value. Useful when restarting a progress\nbar [default: 0]. If using float, consider specifying {n:.3f}\nor similar in bar_format, or specifying unit_scale.\n\n\n\nposition : int, optional\nSpecify the line offset to print this bar (starting from 0).\nAutomatic if unspecified.\nUseful to manage multiple bars at once (eg, from threads).\n\n\n\npostfix : dict or *, optional\nSpecify additional stats to display at the end of the bar.\nCalls set_postfix(**postfix) if possible (dict).\n\n\n\nunit_divisor : float, optional\n[default: 1000], ignored unless unit_scale is True.\n\n\n\nwrite_bytes : bool, optional\nWhether to write bytes. If (default: False) will write unicode.\n\n\n\nlock_args : tuple, optional\nPassed to refresh for intermediate output\n(initialisation, iterating, and updating).\n\n\n\nnrows : int, optional\nThe screen height. If specified, hides nested bars outside this\nbound. If unspecified, attempts to use environment height.\nThe fallback is 20.\n\n\n\ncolour : str, optional\nBar colour (e.g. 'green', '#00ff00').\n\n\n\ndelay : float, optional\nDon't display until [default: 0] seconds have elapsed.\n\n\n\n\nExtra CLI Options\n\n\ndelim : chr, optional\nDelimiting character [default: '\\n']. 
Use '\\0' for null.\nN.B.: on Windows systems, Python converts '\\n' to '\\r\\n'.\n\n\n\nbuf_size : int, optional\nString buffer size in bytes [default: 256]\nused when delim is specified.\n\n\n\nbytes : bool, optional\nIf true, will count bytes, ignore delim, and default\nunit_scale to True, unit_divisor to 1024, and unit to 'B'.\n\n\n\ntee : bool, optional\nIf true, passes stdin to both stderr and stdout.\n\n\n\nupdate : bool, optional\nIf true, will treat input as newly elapsed iterations,\ni.e. numbers to pass to update(). Note that this is slow\n(~2e5 it/s) since every input must be decoded as a number.\n\n\n\nupdate_to : bool, optional\nIf true, will treat input as total elapsed iterations,\ni.e. numbers to assign to self.n. Note that this is slow\n(~2e5 it/s) since every input must be decoded as a number.\n\n\n\nnull : bool, optional\nIf true, will discard input (no stdout).\n\n\n\nmanpath : str, optional\nDirectory in which to install tqdm man pages.\n\n\n\ncomppath : str, optional\nDirectory in which to place tqdm completion.\n\n\n\nlog : str, optional\nCRITICAL|FATAL|ERROR|WARN(ING)|[default: 'INFO']|DEBUG|NOTSET.\n\n\n\nReturns\n\nout  : decorated iterator.\n\nclass tqdm():\n  def update(self, n=1):\n      \"\"\"\n      Manually update the progress bar, useful for streams\n      such as reading files.\n      E.g.:\n      >>> t = tqdm(total=filesize) # Initialise\n      >>> for current_buffer in stream:\n      ...    ...\n      ...    t.update(len(current_buffer))\n      >>> t.close()\n      The last line is highly recommended, but possibly not necessary if\n      ``t.update()`` will be called in such a way that ``filesize`` will be\n      exactly reached and printed.\n\n      Parameters\n      ----------\n      n  : int or float, optional\n          Increment to add to the internal counter of iterations\n          [default: 1]. If using float, consider specifying ``{n:.3f}``\n          or similar in ``bar_format``, or specifying ``unit_scale``.\n\n      Returns\n      -------\n      out  : bool or None\n          True if a ``display()`` was triggered.\n      \"\"\"\n\n  def close(self):\n      \"\"\"Cleanup and (if leave=False) close the progressbar.\"\"\"\n\n  def clear(self, nomove=False):\n      \"\"\"Clear current bar display.\"\"\"\n\n  def refresh(self, nolock=False, lock_args=None):\n      \"\"\"\n      Force refresh the display of this bar.\n\n      Parameters\n      ----------\n      nolock  : bool, optional\n          If ``True``, does not lock.\n          If [default: ``False``]: calls ``acquire()`` on internal lock.\n      lock_args  : tuple, optional\n          Passed to internal lock's ``acquire()``.\n          If specified, will only ``display()`` if ``acquire()`` returns ``True``.\n      \"\"\"\n\n  def unpause(self):\n      \"\"\"Restart tqdm timer from last print time.\"\"\"\n\n  def reset(self, total=None):\n      \"\"\"\n      Resets to 0 iterations for repeated use.\n\n      Consider combining with ``leave=True``.\n\n      Parameters\n      ----------\n      total  : int or float, optional. 
Total to use for the new bar.\n      \"\"\"\n\n  def set_description(self, desc=None, refresh=True):\n      \"\"\"\n      Set/modify description of the progress bar.\n\n      Parameters\n      ----------\n      desc  : str, optional\n      refresh  : bool, optional\n          Forces refresh [default: True].\n      \"\"\"\n\n  def set_postfix(self, ordered_dict=None, refresh=True, **tqdm_kwargs):\n      \"\"\"\n      Set/modify postfix (additional stats)\n      with automatic formatting based on datatype.\n\n      Parameters\n      ----------\n      ordered_dict  : dict or OrderedDict, optional\n      refresh  : bool, optional\n          Forces refresh [default: True].\n      kwargs  : dict, optional\n      \"\"\"\n\n  @classmethod\n  def write(cls, s, file=sys.stdout, end=\"\\n\"):\n      \"\"\"Print a message via tqdm (without overlap with bars).\"\"\"\n\n  @property\n  def format_dict(self):\n      \"\"\"Public API for read-only member access.\"\"\"\n\n  def display(self, msg=None, pos=None):\n      \"\"\"\n      Use ``self.sp`` to display ``msg`` in the specified ``pos``.\n\n      Consider overloading this function when inheriting to use e.g.:\n      ``self.some_frontend(**self.format_dict)`` instead of ``self.sp``.\n\n      Parameters\n      ----------\n      msg  : str, optional. What to display (default: ``repr(self)``).\n      pos  : int, optional. Position to ``moveto``\n        (default: ``abs(self.pos)``).\n      \"\"\"\n\n  @classmethod\n  @contextmanager\n  def wrapattr(cls, stream, method, total=None, bytes=True, **tqdm_kwargs):\n      \"\"\"\n      stream  : file-like object.\n      method  : str, \"read\" or \"write\". The result of ``read()`` and\n          the first argument of ``write()`` should have a ``len()``.\n\n      >>> with tqdm.wrapattr(file_obj, \"read\", total=file_obj.size) as fobj:\n      ...     while True:\n      ...         chunk = fobj.read(chunk_size)\n      ...         if not chunk:\n      ...             
break\n      \"\"\"\n\n  @classmethod\n  def pandas(cls, *targs, **tqdm_kwargs):\n      \"\"\"Registers the current `tqdm` class with `pandas`.\"\"\"\n\ndef trange(*args, **tqdm_kwargs):\n    \"\"\"Shortcut for `tqdm(range(*args), **tqdm_kwargs)`.\"\"\"\n\nConvenience Functions\ndef tqdm.contrib.tenumerate(iterable, start=0, total=None,\n                            tqdm_class=tqdm.auto.tqdm, **tqdm_kwargs):\n    \"\"\"Equivalent of `numpy.ndenumerate` or builtin `enumerate`.\"\"\"\n\ndef tqdm.contrib.tzip(iter1, *iter2plus, **tqdm_kwargs):\n    \"\"\"Equivalent of builtin `zip`.\"\"\"\n\ndef tqdm.contrib.tmap(function, *sequences, **tqdm_kwargs):\n    \"\"\"Equivalent of builtin `map`.\"\"\"\n\nSubmodules\nclass tqdm.notebook.tqdm(tqdm.tqdm):\n    \"\"\"IPython/Jupyter Notebook widget.\"\"\"\n\nclass tqdm.auto.tqdm(tqdm.tqdm):\n    \"\"\"Automatically chooses between `tqdm.notebook` and `tqdm.tqdm`.\"\"\"\n\nclass tqdm.asyncio.tqdm(tqdm.tqdm):\n  \"\"\"Asynchronous version.\"\"\"\n  @classmethod\n  def as_completed(cls, fs, *, loop=None, timeout=None, total=None,\n                   **tqdm_kwargs):\n      \"\"\"Wrapper for `asyncio.as_completed`.\"\"\"\n\nclass tqdm.gui.tqdm(tqdm.tqdm):\n    \"\"\"Matplotlib GUI version.\"\"\"\n\nclass tqdm.tk.tqdm(tqdm.tqdm):\n    \"\"\"Tkinter GUI version.\"\"\"\n\nclass tqdm.rich.tqdm(tqdm.tqdm):\n    \"\"\"`rich.progress` version.\"\"\"\n\nclass tqdm.keras.TqdmCallback(keras.callbacks.Callback):\n    \"\"\"Keras callback for epoch and batch progress.\"\"\"\n\nclass tqdm.dask.TqdmCallback(dask.callbacks.Callback):\n    \"\"\"Dask callback for task progress.\"\"\"\n\ncontrib\nThe tqdm.contrib package also contains experimental modules:\n\ntqdm.contrib.itertools: Thin wrappers around itertools\ntqdm.contrib.concurrent: Thin wrappers around concurrent.futures\ntqdm.contrib.slack: Posts to Slack bots\ntqdm.contrib.discord: Posts to Discord bots\ntqdm.contrib.telegram: Posts to Telegram bots\ntqdm.contrib.bells: Automagically enables all optional features\nauto, pandas, slack, discord, telegram\n\n\n\n\nExamples and Advanced Usage\n\nSee the examples\nfolder;\nimport the module and run help();\nconsult the wiki;\nthis has an\nexcellent article\non how to make a great progressbar;\n\n\ncheck out the slides from PyData London, or\nrun the \n.\n\n\nDescription and additional stats\nCustom information can be displayed and updated dynamically on tqdm bars\nwith the desc and postfix arguments:\nfrom tqdm import tqdm, trange\nfrom random import random, randint\nfrom time import sleep\n\nwith trange(10) as t:\n    for i in t:\n        # Description will be displayed on the left\n        t.set_description('GEN %i' % i)\n        # Postfix will be displayed on the right,\n        # formatted automatically based on argument's datatype\n        t.set_postfix(loss=random(), gen=randint(1,999), str='h',\n                      lst=[1, 2])\n        sleep(0.1)\n\nwith tqdm(total=10, bar_format=\"{postfix[0]} {postfix[1][value]:>8.2g}\",\n          postfix=[\"Batch\", {\"value\": 0}]) as t:\n    for i in range(10):\n        sleep(0.1)\n        t.postfix[1][\"value\"] = i / 2\n        t.update()\nPoints to remember when using {postfix[...]} in the bar_format string:\n\npostfix also needs to be passed as an initial argument in a compatible\nformat, and\npostfix will be auto-converted to a string if it is a dict-like\nobject. 
To prevent this behaviour, insert an extra item into the dictionary\nwhere the key is not a string.\n\nAdditional bar_format parameters may also be defined by overriding\nformat_dict, and the bar itself may be modified using ascii:\nfrom tqdm import tqdm\nclass TqdmExtraFormat(tqdm):\n    \"\"\"Provides a `total_time` format parameter\"\"\"\n    @property\n    def format_dict(self):\n        d = super(TqdmExtraFormat, self).format_dict\n        total_time = d[\"elapsed\"] * (d[\"total\"] or 0) / max(d[\"n\"], 1)\n        d.update(total_time=self.format_interval(total_time) + \" in total\")\n        return d\n\nfor i in TqdmExtraFormat(\n      range(9), ascii=\" .oO0\",\n      bar_format=\"{total_time}: {percentage:.0f}%|{bar}{r_bar}\"):\n    if i == 4:\n        break\n00:00 in total: 44%|0000.     | 4/9 [00:00<00:00, 962.93it/s]\n\nNote that {bar} also supports a format specifier [width][type].\n\nwidth\nunspecified (default): automatic to fill ncols\nint >= 0: fixed width overriding ncols logic\nint < 0: subtract from the automatic default\n\n\ntype\na: ascii (ascii=True override)\nu: unicode (ascii=False override)\nb: blank (ascii=\"  \" override)\n\n\n\nThis means a fixed bar with right-justified text may be created by using:\nbar_format=\"{l_bar}{bar:10}|{bar:-10b}right-justified\"\n\nNested progress bars\ntqdm supports nested progress bars. Here's an example:\nfrom tqdm.auto import trange\nfrom time import sleep\n\nfor i in trange(4, desc='1st loop'):\n    for j in trange(5, desc='2nd loop'):\n        for k in trange(50, desc='3rd loop', leave=False):\n            sleep(0.01)\nFor manual control over positioning (e.g. for multi-processing use),\nyou may specify position=n where n=0 for the outermost bar,\nn=1 for the next, and so on.\nHowever, it's best to check if tqdm can work without manual position\nfirst.\nfrom time import sleep\nfrom tqdm import trange, tqdm\nfrom multiprocessing import Pool, RLock, freeze_support\n\nL = list(range(9))\n\ndef progresser(n):\n    interval = 0.001 / (n + 2)\n    total = 5000\n    text = \"#{}, est. {:<04.2}s\".format(n, interval * total)\n    for _ in trange(total, desc=text, position=n):\n        sleep(interval)\n\nif __name__ == '__main__':\n    freeze_support()  # for Windows support\n    tqdm.set_lock(RLock())  # for managing output contention\n    p = Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))\n    p.map(progresser, L)\nNote that in Python 3, tqdm.write is thread-safe:\nfrom time import sleep\nfrom tqdm import tqdm, trange\nfrom concurrent.futures import ThreadPoolExecutor\n\nL = list(range(9))\n\ndef progresser(n):\n    interval = 0.001 / (n + 2)\n    total = 5000\n    text = \"#{}, est. {:<04.2}s\".format(n, interval * total)\n    for _ in trange(total, desc=text):\n        sleep(interval)\n    if n == 6:\n        tqdm.write(\"n == 6 completed.\")\n        tqdm.write(\"`tqdm.write()` is thread-safe in py3!\")\n\nif __name__ == '__main__':\n    with ThreadPoolExecutor() as p:\n        p.map(progresser, L)\n\nHooks and callbacks\ntqdm can easily support callbacks/hooks and manual updates.\nHere's an example with urllib:\n``urllib.urlretrieve`` documentation\n\n\n[...]\nIf present, the hook function will be called once\non establishment of the network connection and once after each block read\nthereafter. 
The hook will be passed three arguments: a count of blocks\ntransferred so far, a block size in bytes, and the total size of the file.\n[...]\n\n\nimport urllib, os\nfrom tqdm import tqdm\nurllib = getattr(urllib, 'request', urllib)\n\nclass TqdmUpTo(tqdm):\n    \"\"\"Provides `update_to(n)` which uses `tqdm.update(delta_n)`.\"\"\"\n    def update_to(self, b=1, bsize=1, tsize=None):\n        \"\"\"\n        b  : int, optional\n            Number of blocks transferred so far [default: 1].\n        bsize  : int, optional\n            Size of each block (in tqdm units) [default: 1].\n        tsize  : int, optional\n            Total size (in tqdm units). If [default: None] remains unchanged.\n        \"\"\"\n        if tsize is not None:\n            self.total = tsize\n        return self.update(b * bsize - self.n)  # also sets self.n = b * bsize\n\neg_link = \"https://caspersci.uk.to/matryoshka.zip\"\nwith TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,\n              desc=eg_link.split('/')[-1]) as t:  # all optional kwargs\n    urllib.urlretrieve(eg_link, filename=os.devnull,\n                       reporthook=t.update_to, data=None)\n    t.total = t.n\nInspired by twine#242.\nFunctional alternative in\nexamples/tqdm_wget.py.\nIt is recommended to use miniters=1 whenever there are potentially\nlarge differences in iteration speed (e.g. downloading a file over\na patchy connection).\nWrapping read/write methods\nTo measure throughput through a file-like object's read or write\nmethods, use CallbackIOWrapper:\nfrom tqdm.auto import tqdm\nfrom tqdm.utils import CallbackIOWrapper\n\nwith tqdm(total=file_obj.size,\n          unit='B', unit_scale=True, unit_divisor=1024) as t:\n    fobj = CallbackIOWrapper(t.update, file_obj, \"read\")\n    while True:\n        chunk = fobj.read(chunk_size)\n        if not chunk:\n            break\n    t.reset()\n    # ... continue to use `t` for something else\nAlternatively, use the even simpler wrapattr convenience function,\nwhich would condense both the urllib and CallbackIOWrapper examples\ndown to:\nimport urllib, os\nfrom tqdm import tqdm\n\neg_link = \"https://caspersci.uk.to/matryoshka.zip\"\nresponse = getattr(urllib, 'request', urllib).urlopen(eg_link)\nwith tqdm.wrapattr(open(os.devnull, \"wb\"), \"write\",\n                   miniters=1, desc=eg_link.split('/')[-1],\n                   total=getattr(response, 'length', None)) as fout:\n    for chunk in response:\n        fout.write(chunk)\nThe requests equivalent is nearly identical:\nimport requests, os\nfrom tqdm import tqdm\n\neg_link = \"https://caspersci.uk.to/matryoshka.zip\"\nresponse = requests.get(eg_link, stream=True)\nwith tqdm.wrapattr(open(os.devnull, \"wb\"), \"write\",\n                   miniters=1, desc=eg_link.split('/')[-1],\n                   total=int(response.headers.get('content-length', 0))) as fout:\n    for chunk in response.iter_content(chunk_size=4096):\n        fout.write(chunk)\nCustom callback\ntqdm is known for intelligently skipping unnecessary displays. To make a\ncustom callback take advantage of this, simply use the return value of\nupdate(). 
This is set to True if a display() was triggered.\nfrom tqdm.auto import tqdm as std_tqdm\n\ndef external_callback(*args, **kwargs):\n    ...\n\nclass TqdmExt(std_tqdm):\n    def update(self, n=1):\n        displayed = super(TqdmExt, self).update(n)\n        if displayed:\n            external_callback(**self.format_dict)\n        return displayed\n\nasyncio\nNote that break isn't currently caught by asynchronous iterators.\nThis means that tqdm cannot clean up after itself in this case:\nfrom tqdm.asyncio import tqdm\n\nasync for i in tqdm(range(9)):\n    if i == 2:\n        break\nInstead, either call pbar.close() manually or use the context manager syntax:\nfrom tqdm.asyncio import tqdm\n\nwith tqdm(range(9)) as pbar:\n    async for i in pbar:\n        if i == 2:\n            break\n\nPandas Integration\nDue to popular demand we've added support for pandas -- here's an example\nfor DataFrame.progress_apply and DataFrameGroupBy.progress_apply:\nimport pandas as pd\nimport numpy as np\nfrom tqdm import tqdm\n\ndf = pd.DataFrame(np.random.randint(0, 100, (100000, 6)))\n\n# Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`\n# (can use `tqdm.gui.tqdm`, `tqdm.notebook.tqdm`, optional kwargs, etc.)\ntqdm.pandas(desc=\"my bar!\")\n\n# Now you can use `progress_apply` instead of `apply`\n# and `progress_map` instead of `map`\ndf.progress_apply(lambda x: x**2)\n# can also groupby:\n# df.groupby(0).progress_apply(lambda x: x**2)\nIn case you're interested in how this works (and how to modify it for your\nown callbacks), see the\nexamples\nfolder or import the module and run help().\n\nKeras Integration\nA keras callback is also available:\nfrom tqdm.keras import TqdmCallback\n\n...\n\nmodel.fit(..., verbose=0, callbacks=[TqdmCallback()])\n\nDask Integration\nA dask callback is also available:\nfrom tqdm.dask import TqdmCallback\n\nwith TqdmCallback(desc=\"compute\"):\n    ...\n    arr.compute()\n\n# or use callback globally\ncb = TqdmCallback(desc=\"global\")\ncb.register()\narr.compute()\n\nIPython/Jupyter Integration\nIPython/Jupyter is supported via the tqdm.notebook submodule:\nfrom tqdm.notebook import trange, tqdm\nfrom time import sleep\n\nfor i in trange(3, desc='1st loop'):\n    for j in tqdm(range(100), desc='2nd loop'):\n        sleep(0.01)\nIn addition to tqdm features, the submodule provides a native Jupyter\nwidget (compatible with IPython v1-v4 and Jupyter), fully working nested bars\nand colour hints (blue: normal, green: completed, red: error/interrupt,\nlight blue: no ETA); as demonstrated below.\n\n\n\nThe notebook version supports percentage or pixels for overall width\n(e.g.: ncols='100%' or ncols='480px').\nIt is also possible to let tqdm automatically choose between\nconsole or notebook versions by using the autonotebook submodule:\nfrom tqdm.autonotebook import tqdm\ntqdm.pandas()\nNote that this will issue a TqdmExperimentalWarning if run in a notebook\nsince it is not meant to be possible to distinguish between jupyter notebook\nand jupyter console. 
Use auto instead of autonotebook to suppress\nthis warning.\nNote that notebooks will display the bar in the cell where it was created.\nThis may be a different cell from the one where it is used.\nIf this is not desired, either\n\ndelay the creation of the bar to the cell where it must be displayed, or\ncreate the bar with display=False, and in a later cell call\ndisplay(bar.container):\n\nfrom tqdm.notebook import tqdm\npbar = tqdm(..., display=False)\n# different cell\ndisplay(pbar.container)\nThe keras callback has a display() method which can be used likewise:\nfrom tqdm.keras import TqdmCallback\ncbk = TqdmCallback(display=False)\n# different cell\ncbk.display()\nmodel.fit(..., verbose=0, callbacks=[cbk])\nAnother possibility is to have a single bar (near the top of the notebook)\nwhich is constantly re-used (using reset() rather than close()).\nFor this reason, the notebook version (unlike the CLI version) does not\nautomatically call close() upon Exception.\nfrom tqdm.notebook import tqdm\npbar = tqdm()\n# different cell\niterable = range(100)\npbar.reset(total=len(iterable))  # initialise with new `total`\nfor i in iterable:\n    pbar.update()\npbar.refresh()  # force print final status but don't `close()`\n\nCustom Integration\nTo change the default arguments (such as making dynamic_ncols=True),\nsimply use built-in Python magic:\nfrom functools import partial\nfrom tqdm import tqdm as std_tqdm\ntqdm = partial(std_tqdm, dynamic_ncols=True)\nFor further customisation,\ntqdm may be inherited from to create custom callbacks (as with the\nTqdmUpTo example above) or for custom frontends\n(e.g. GUIs such as notebook or plotting packages). In the latter case:\n\ndef __init__() to call super().__init__(..., gui=True) to disable\nterminal status_printer creation.\nRedefine: close(), clear(), display().\n\nConsider overloading display() to use e.g.\nself.frontend(**self.format_dict) instead of self.sp(repr(self)).\nSome submodule examples of inheritance:\n\ntqdm/notebook.py\ntqdm/gui.py\ntqdm/tk.py\ntqdm/contrib/slack.py\ntqdm/contrib/discord.py\ntqdm/contrib/telegram.py\n\n\nDynamic Monitor/Meter\nYou can use a tqdm as a meter which is not monotonically increasing.\nThis could be because n decreases (e.g. a CPU usage monitor) or total\nchanges.\nOne example would be recursively searching for files. 
The total is the\nnumber of objects found so far, while n is the number of those objects which\nare files (rather than folders):\nfrom tqdm import tqdm\nimport os.path\n\ndef find_files_recursively(path, show_progress=True):\n    files = []\n    # total=1 assumes `path` is a file\n    t = tqdm(total=1, unit=\"file\", disable=not show_progress)\n    if not os.path.exists(path):\n        raise IOError(\"Cannot find:\" + path)\n\n    def append_found_file(f):\n        files.append(f)\n        t.update()\n\n    def list_found_dir(path):\n        \"\"\"returns os.listdir(path) assuming os.path.isdir(path)\"\"\"\n        listing = os.listdir(path)\n        # subtract 1 since a \"file\" we found was actually this directory\n        t.total += len(listing) - 1\n        # fancy way to give info without forcing a refresh\n        t.set_postfix(dir=path[-10:], refresh=False)\n        t.update(0)  # may trigger a refresh\n        return listing\n\n    def recursively_search(path):\n        if os.path.isdir(path):\n            for f in list_found_dir(path):\n                recursively_search(os.path.join(path, f))\n        else:\n            append_found_file(path)\n\n    recursively_search(path)\n    t.set_postfix(dir=path)\n    t.close()\n    return files\nUsing update(0) is a handy way to let tqdm decide when to trigger a\ndisplay refresh to avoid console spamming.\n\nWriting messages\nThis is a work in progress (see\n#737).\nSince tqdm uses a simple printing mechanism to display progress bars,\nyou should not write any message in the terminal using print() while\na progressbar is open.\nTo write messages in the terminal without any collision with tqdm bar\ndisplay, a .write() method is provided:\nfrom tqdm.auto import tqdm, trange\nfrom time import sleep\n\nbar = trange(10)\nfor i in bar:\n    # Print using tqdm class method .write()\n    sleep(0.1)\n    if not (i % 3):\n        tqdm.write(\"Done task %i\" % i)\n    # Can also use bar.write()\nBy default, this will print to standard output sys.stdout, but you can\nspecify any file-like object using the file argument. 
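\nA minimal sketch of that (the progress.log filename here is just an illustration, not part of tqdm):\nfrom time import sleep\nfrom tqdm import tqdm, trange\n\nwith open('progress.log', 'w') as log:\n    for i in trange(10):\n        sleep(0.1)\n        if not (i % 3):\n            # goes to the log file instead of interfering with the bar\n            tqdm.write('Done task %i' % i, file=log)\n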
This can be used, for example, to redirect messages to a log file\n(as sketched above) or to a custom file-like class.\n\nRedirecting writing\nIf using a library that can print messages to the console, editing the library\nby replacing print() with tqdm.write() may not be desirable.\nIn that case, redirecting sys.stdout to tqdm.write() is an option.\nTo redirect sys.stdout, create a file-like class that will write\nany input string to tqdm.write(), and supply the arguments\nfile=sys.stdout, dynamic_ncols=True.\nA reusable canonical example is given below:\nfrom time import sleep\nimport contextlib\nimport sys\nfrom tqdm import tqdm\nfrom tqdm.contrib import DummyTqdmFile\n\n\n@contextlib.contextmanager\ndef std_out_err_redirect_tqdm():\n    orig_out_err = sys.stdout, sys.stderr\n    try:\n        sys.stdout, sys.stderr = map(DummyTqdmFile, orig_out_err)\n        yield orig_out_err[0]\n    # Relay exceptions\n    except Exception as exc:\n        raise exc\n    # Always restore sys.stdout/err if necessary\n    finally:\n        sys.stdout, sys.stderr = orig_out_err\n\ndef some_fun(i):\n    print(\"Fee, fi, fo,\".split()[i])\n\n# Redirect stdout to tqdm.write() (don't forget the `as save_stdout`)\nwith std_out_err_redirect_tqdm() as orig_stdout:\n    # tqdm needs the original stdout\n    # and dynamic_ncols=True to autodetect console width\n    for i in tqdm(range(3), file=orig_stdout, dynamic_ncols=True):\n        sleep(.5)\n        some_fun(i)\n\n# After the `with`, printing is restored\nprint(\"Done!\")\n\nRedirecting logging\nSimilar to sys.stdout/sys.stderr as detailed above, console logging\nmay also be redirected to tqdm.write().\nWarning: if also redirecting sys.stdout/sys.stderr, make sure to\nredirect logging first if needed.\nHelper methods are available in tqdm.contrib.logging. For example:\nimport logging\nfrom tqdm import trange\nfrom tqdm.contrib.logging import logging_redirect_tqdm\n\nLOG = logging.getLogger(__name__)\n\nif __name__ == '__main__':\n    logging.basicConfig(level=logging.INFO)\n    with logging_redirect_tqdm():\n        for i in trange(9):\n            if i == 4:\n                LOG.info(\"console logging redirected to `tqdm.write()`\")\n    # logging restored\n\nMonitoring thread, intervals and miniters\ntqdm implements a few tricks to increase efficiency and reduce overhead.\n\nAvoid unnecessary frequent bar refreshing: mininterval defines how long\nto wait between each refresh. tqdm always gets updated in the background,\nbut it will display only every mininterval.\nReduce number of calls to check system clock/time.\nmininterval is more intuitive to configure than miniters.\nA clever adjustment system dynamic_miniters will automatically adjust\nminiters to the amount of iterations that fit into time mininterval.\nEssentially, tqdm will check if it's time to print without actually\nchecking time. This behaviour can still be bypassed by manually setting\nminiters.\n\nHowever, consider a case with a combination of fast and slow iterations.\nAfter a few fast iterations, dynamic_miniters will set miniters to a\nlarge number. When iteration rate subsequently slows, miniters will\nremain large and thus reduce display update frequency. 
To address this:\n\nmaxinterval defines the maximum time between display refreshes.\nA concurrent monitoring thread checks for overdue updates and forces one\nwhere necessary.\n\nThe monitoring thread should not have a noticeable overhead, and guarantees\nupdates at least every 10 seconds by default.\nThis value can be directly changed by setting the monitor_interval of\nany tqdm instance (i.e. t = tqdm.tqdm(...); t.monitor_interval = 2).\nThe monitor thread may be disabled application-wide by setting\ntqdm.tqdm.monitor_interval = 0 before instantiation of any tqdm bar.\n\nMerch\nYou can buy tqdm branded merch now!\n\nContributions\n     \nAll source code is hosted on GitHub.\nContributions are welcome.\nSee the\nCONTRIBUTING\nfile for more information.\nDevelopers who have made significant contributions, ranked by SLoC\n(surviving lines of code,\ngit fame -wMC --excl '\\.(png|gif|jpg)$'),\nare:\n\n\nName\nID\nSLoC\nNotes\n\n\n\nCasper da Costa-Luis\ncasperdcl\n~80%\nprimary maintainer \n\nStephen Larroque\nlrq3000\n~9%\nteam member\n\nMartin Zugnoni\nmartinzugnoni\n~3%\n\u00a0\n\nDaniel Ecer\nde-code\n~2%\n\u00a0\n\nRichard Sheridan\nrichardsheridan\n~1%\n\u00a0\n\nGuangshuo Chen\nchengs\n~1%\n\u00a0\n\nHelio Machado\n0x2b3bfa0\n~1%\n\u00a0\n\nKyle Altendorf\naltendky\n<1%\n\u00a0\n\nNoam Yorav-Raphael\nnoamraph\n<1%\noriginal author\n\nMatthew Stevens\nmjstevens777\n<1%\n\u00a0\n\nHadrien Mary\nhadim\n<1%\nteam member\n\nMikhail Korobov\nkmike\n<1%\nteam member\n\n\n\n\nPorts to Other Languages\nA list is available on\nthis wiki page.\n\nLICENCE\nOpen Source (OSI approved): \n\nCitation information: \n\n (Since 19 May 2016)\n\n\n", "description": "Fast, extensible progress bar for loops and CLI."}, {"name": "tornado", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nTornado Web Server\nHello, world\nDocumentation\n\n\n\n\n\nREADME.rst\n\n\n\n\nTornado Web Server\n\n\nTornado is a Python web framework and\nasynchronous networking library, originally developed at FriendFeed.  
By using non-blocking network I/O, Tornado\ncan scale to tens of thousands of open connections, making it ideal for\nlong polling,\nWebSockets, and other\napplications that require a long-lived connection to each user.\n\nHello, world\nHere is a simple \"Hello, world\" example web app for Tornado:\nimport asyncio\nimport tornado\n\nclass MainHandler(tornado.web.RequestHandler):\n    def get(self):\n        self.write(\"Hello, world\")\n\ndef make_app():\n    return tornado.web.Application([\n        (r\"/\", MainHandler),\n    ])\n\nasync def main():\n    app = make_app()\n    app.listen(8888)\n    await asyncio.Event().wait()\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\nThis example does not use any of Tornado's asynchronous features; for\nthat see this simple chat room.\n\nDocumentation\nDocumentation and links to additional resources are available at\nhttps://www.tornadoweb.org\n\n\n", "description": "Web framework and asynchronous networking library.", "category": "Web"}, {"name": "torchvision", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ntorchvision\nInstallation\nImage Backends\n[UNSTABLE] Video Backend\nUsing the models on C++\nTorchVision Operators\nDocumentation\nContributing\nDisclaimer on Datasets\nPre-trained Model License\nCiting TorchVision\n\n\n\n\n\nREADME.md\n\n\n\n\ntorchvision\n\n\nThe torchvision package consists of popular datasets, model architectures, and common image transformations for computer\nvision.\nInstallation\nPlease refer to the official\ninstructions to install the stable\nversions of torch and torchvision on your system.\nTo build source, refer to our contributing\npage.\nThe following is the corresponding torchvision versions and supported Python\nversions.\n\n\n\ntorch\ntorchvision\nPython\n\n\n\n\nmain / nightly\nmain / nightly\n>=3.8, <=3.11\n\n\n2.0\n0.15\n>=3.8, <=3.11\n\n\n1.13\n0.14\n>=3.7.2, <=3.10\n\n\n1.12\n0.13\n>=3.7, <=3.10\n\n\n\n\nolder versions\n\n\n\ntorch\ntorchvision\nPython\n\n\n\n\n1.11\n0.12\n>=3.7, <=3.10\n\n\n1.10\n0.11\n>=3.6, <=3.9\n\n\n1.9\n0.10\n>=3.6, <=3.9\n\n\n1.8\n0.9\n>=3.6, <=3.9\n\n\n1.7\n0.8\n>=3.6, <=3.9\n\n\n1.6\n0.7\n>=3.6, <=3.8\n\n\n1.5\n0.6\n>=3.5, <=3.8\n\n\n1.4\n0.5\n==2.7, >=3.5, <=3.8\n\n\n1.3\n0.4.2 / 0.4.3\n==2.7, >=3.5, <=3.7\n\n\n1.2\n0.4.1\n==2.7, >=3.5, <=3.7\n\n\n1.1\n0.3\n==2.7, >=3.5, <=3.7\n\n\n<=1.0\n0.2\n==2.7, >=3.5, <=3.7\n\n\n\n\nImage Backends\nTorchvision currently supports the following image backends:\n\ntorch tensors\nPIL images:\n\nPillow\nPillow-SIMD - a much faster drop-in replacement for Pillow with SIMD.\n\n\n\nRead more in in our docs.\n[UNSTABLE] Video Backend\nTorchvision currently supports the following video backends:\n\npyav (default) - Pythonic binding for ffmpeg libraries.\nvideo_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any\nconflicting version of ffmpeg installed. 
Currently, this is only supported on Linux.\n\nconda install -c conda-forge ffmpeg\npython setup.py install\n\nUsing the models on C++\nTorchVision provides an example project for how to use the models on C++ using JIT Script.\nInstallation From source:\nmkdir build\ncd build\n# Add -DWITH_CUDA=on support for the CUDA if needed\ncmake ..\nmake\nmake install\n\nOnce installed, the library can be accessed in cmake (after properly configuring CMAKE_PREFIX_PATH) via the\nTorchVision::TorchVision target:\nfind_package(TorchVision REQUIRED)\ntarget_link_libraries(my-target PUBLIC TorchVision::TorchVision)\n\nThe TorchVision package will also automatically look for the Torch package and add it as a dependency to\nmy-target, so make sure that it is also available to cmake via the CMAKE_PREFIX_PATH.\nFor an example setup, take a look at examples/cpp/hello_world.\nPython linking is disabled by default when compiling TorchVision with CMake, this allows you to run models without any\nPython dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link\nto Python. This can be done by passing -DUSE_PYTHON=on to CMake.\nTorchVision Operators\nIn order to get the torchvision operators registered with torch (eg. for the JIT), all you need to do is to ensure that\nyou #include <torchvision/vision.h> in your project.\nDocumentation\nYou can find the API documentation on the pytorch website: https://pytorch.org/vision/stable/index.html\nContributing\nSee the CONTRIBUTING file for how to help out.\nDisclaimer on Datasets\nThis is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets,\nvouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to\ndetermine whether you have permission to use the dataset under the dataset's license.\nIf you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset\nto be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML\ncommunity!\nPre-trained Model License\nThe pre-trained models provided in this library may have their own licenses or terms and conditions derived from the\ndataset used for training. It is your responsibility to determine whether you have permission to use the models for your\nuse case.\nMore specifically, SWAG models are released under the CC-BY-NC 4.0 license. 
See\nSWAG LICENSE for additional details.\nCiting TorchVision\nIf you find TorchVision useful in your work, please consider citing the following BibTeX entry:\n@software{torchvision2016,\n    title        = {TorchVision: PyTorch's Computer Vision library},\n    author       = {TorchVision maintainers and contributors},\n    year         = 2016,\n    journal      = {GitHub repository},\n    publisher    = {GitHub},\n    howpublished = {\\url{https://github.com/pytorch/vision}}\n}\n\n\n", "description": "Computer vision datasets, models, and image transformations for PyTorch."}, {"name": "torchtext", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ntorchtext\nInstallation\nOptional requirements\nBuilding from source\nDocumentation\nDatasets\nModels\nTokenizers\nTutorials\nDisclaimer on Datasets\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\ntorchtext\nThis repository consists of:\n\ntorchtext.datasets: The raw text iterators for common NLP datasets\ntorchtext.data: Some basic NLP building blocks\ntorchtext.transforms: Basic text-processing transformations\ntorchtext.models: Pre-trained models\ntorchtext.vocab: Vocab and Vectors related classes and factory functions\nexamples: Example NLP workflows with PyTorch and torchtext library.\n\n\nInstallation\nWe recommend Anaconda as a Python package management system. Please refer to pytorch.org for the details of PyTorch installation. The following are the corresponding torchtext versions and supported Python versions.\n\nVersion Compatibility\n\n\n\n\n\n\nPyTorch version\ntorchtext version\nSupported Python version\n\n\n\nnightly build\nmain\n>=3.8, <=3.11\n\n1.14.0\n0.15.0\n>=3.8, <=3.11\n\n1.13.0\n0.14.0\n>=3.7, <=3.10\n\n1.12.0\n0.13.0\n>=3.7, <=3.10\n\n1.11.0\n0.12.0\n>=3.6, <=3.9\n\n1.10.0\n0.11.0\n>=3.6, <=3.9\n\n1.9.1\n0.10.1\n>=3.6, <=3.9\n\n1.9\n0.10\n>=3.6, <=3.9\n\n1.8.1\n0.9.1\n>=3.6, <=3.9\n\n1.8\n0.9\n>=3.6, <=3.9\n\n1.7.1\n0.8.1\n>=3.6, <=3.9\n\n1.7\n0.8\n>=3.6, <=3.8\n\n1.6\n0.7\n>=3.6, <=3.8\n\n1.5\n0.6\n>=3.5, <=3.8\n\n1.4\n0.5\n2.7, >=3.5, <=3.8\n\n0.4 and below\n0.2.3\n2.7, >=3.5, <=3.8\n\n\n\nUsing conda:\nconda install -c pytorch torchtext\n\nUsing pip:\npip install torchtext\n\n\nOptional requirements\nIf you want to use English tokenizer from SpaCy, you need to install SpaCy and download its English model:\npip install spacy\npython -m spacy download en_core_web_sm\n\nAlternatively, you might want to use the Moses tokenizer port in SacreMoses (split from NLTK). You have to install SacreMoses:\npip install sacremoses\n\nFor torchtext 0.5 and below, sentencepiece:\nconda install -c powerai sentencepiece\n\n\nBuilding from source\nTo build torchtext from source, you need git, CMake and C++11 compiler such as g++.:\ngit clone https://github.com/pytorch/text torchtext\ncd torchtext\ngit submodule update --init --recursive\n\n# Linux\npython setup.py clean install\n\n# OSX\nCC=clang CXX=clang++ python setup.py clean install\n\n# or ``python setup.py develop`` if you are making modifications.\n\nNote\nWhen building from source, make sure that you have the same C++ compiler as the one used to build PyTorch. A simple way is to build PyTorch from source and use the same environment to build torchtext.\nIf you are using the nightly build of PyTorch, checkout the environment it was built with conda (here) and pip (here).\nAdditionally, datasets in torchtext are implemented using the torchdata library. 
Please take a look at the\ninstallation instructions to download the latest nightlies or install from source.\n\nDocumentation\nFind the documentation here.\n\nDatasets\nThe datasets module currently contains:\n\nLanguage modeling: WikiText2, WikiText103, PennTreebank, EnWik9\nMachine translation: IWSLT2016, IWSLT2017, Multi30k\nSequence tagging (e.g. POS/NER): UDPOS, CoNLL2000Chunking\nQuestion answering: SQuAD1, SQuAD2\nText classification: SST2, AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB\nModel pre-training: CC-100\n\n\nModels\nThe library currently consist of following pre-trained models:\n\nRoBERTa: Base and Large Architecture\nDistilRoBERTa\nXLM-RoBERTa: Base and Large Architure\nT5: Small, Base, Large, 3B, and 11B Architecture\nFlan-T5: Base, Large, XL, and XXL Architecture\n\n\nTokenizers\nThe transforms module currently support following scriptable tokenizers:\n\nSentencePiece\nGPT-2 BPE\nCLIP\nRE2\nBERT\n\n\nTutorials\nTo get started with torchtext, users may refer to the following tutorial available on PyTorch website.\n\nSST-2 binary text classification using XLM-R pre-trained model\nText classification with AG_NEWS dataset\nTranslation trained with Multi30k dataset using transformers and torchtext\nLanguage modeling using transforms and torchtext\n\n\nDisclaimer on Datasets\nThis is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.\nIf you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!\n\n\n", "description": "Text processing datasets, data loaders, and models for PyTorch."}, {"name": "torchaudio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ntorchaudio: an audio library for PyTorch\nInstallation\nAPI Reference\nContributing Guidelines\nCitation\nDisclaimer on Datasets\nPre-trained Model License\n\n\n\n\n\nREADME.md\n\n\n\n\ntorchaudio: an audio library for PyTorch\n\n\n\n\nThe aim of torchaudio is to apply PyTorch to\nthe audio domain. By supporting PyTorch, torchaudio follows the same philosophy\nof providing strong GPU acceleration, having a focus on trainable features through\nthe autograd system, and having consistent style (tensor names and dimension names).\nTherefore, it is primarily a machine learning library and not a general signal\nprocessing library. 
The benefits of PyTorch can be seen in torchaudio through\nhaving all the computations be through PyTorch operations which makes it easy\nto use and feel like a natural extension.\n\nSupport audio I/O (Load files, Save files)\n\nLoad a variety of audio formats, such as wav, mp3, ogg, flac, opus, sphere, into a torch Tensor using SoX\nKaldi (ark/scp)\n\n\nDataloaders for common audio datasets\nAudio and speech processing functions\n\nforced_align\n\n\nCommon audio transforms\n\nSpectrogram, AmplitudeToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding, Resample\n\n\nCompliance interfaces: Run code using PyTorch that align with other libraries\n\nKaldi: spectrogram, fbank, mfcc\n\n\n\nInstallation\nPlease refer to https://pytorch.org/audio/main/installation.html for installation and build process of TorchAudio.\nAPI Reference\nAPI Reference is located here: http://pytorch.org/audio/main/\nContributing Guidelines\nPlease refer to CONTRIBUTING.md\nCitation\nIf you find this package useful, please cite as:\n@article{yang2021torchaudio,\n  title={TorchAudio: Building Blocks for Audio and Speech Processing},\n  author={Yao-Yuan Yang and Moto Hira and Zhaoheng Ni and Anjali Chourdia and Artyom Astafurov and Caroline Chen and Ching-Feng Yeh and Christian Puhrsch and David Pollack and Dmitriy Genzel and Donny Greenberg and Edward Z. Yang and Jason Lian and Jay Mahadeokar and Jeff Hwang and Ji Chen and Peter Goldsborough and Prabhat Roy and Sean Narenthiran and Shinji Watanabe and Soumith Chintala and Vincent Quenneville-B\u00e9lair and Yangyang Shi},\n  journal={arXiv preprint arXiv:2110.15018},\n  year={2021}\n}\nDisclaimer on Datasets\nThis is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.\nIf you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!\nPre-trained Model License\nThe pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.\nFor instance, SquimSubjective model is released under the Creative Commons Attribution Non Commercial 4.0 International (CC-BY-NC 4.0) license. See the link for additional details.\nOther pre-trained models that have different license are noted in documentation. 
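As a purely illustrative sketch of how such pre-trained weights are usually obtained (the torchaudio.pipelines bundle API and the bundle name below are assumptions, not taken from this README), the license of the chosen bundle applies to the weights it downloads:\nimport torchaudio\n\n# Hypothetical example bundle; its weights carry that model's own license terms\nbundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H\nmodel = bundle.get_model()  # downloads the pre-trained weights on first use\n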
Please checkout the documentation page.\n\n\n", "description": "Audio processing dataset and models for PyTorch."}, {"name": "torch", "readme": "\n\n\nPyTorch is a Python package that provides two high-level features:\n\nTensor computation (like NumPy) with strong GPU acceleration\nDeep neural networks built on a tape-based autograd system\n\nYou can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.\nOur trunk health (Continuous Integration signals) can be found at hud.pytorch.org.\n\nMore About PyTorch\n\nA GPU-Ready Tensor Library\nDynamic Neural Networks: Tape-Based Autograd\nPython First\nImperative Experiences\nFast and Lean\nExtensions Without Pain\n\n\nInstallation\n\nBinaries\n\nNVIDIA Jetson Platforms\n\n\nFrom Source\n\nPrerequisites\nInstall Dependencies\nGet the PyTorch Source\nInstall PyTorch\n\nAdjust Build Options (Optional)\n\n\n\n\nDocker Image\n\nUsing pre-built images\nBuilding the image yourself\n\n\nBuilding the Documentation\nPrevious Versions\n\n\nGetting Started\nResources\nCommunication\nReleases and Contributing\nThe Team\nLicense\n\nMore About PyTorch\nAt a granular level, PyTorch is a library that consists of the following components:\n\n\n\nComponent\nDescription\n\n\n\n\ntorch\nA Tensor library like NumPy, with strong GPU support\n\n\ntorch.autograd\nA tape-based automatic differentiation library that supports all differentiable Tensor operations in torch\n\n\ntorch.jit\nA compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code\n\n\ntorch.nn\nA neural networks library deeply integrated with autograd designed for maximum flexibility\n\n\ntorch.multiprocessing\nPython multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training\n\n\ntorch.utils\nDataLoader and other utility functions for convenience\n\n\n\nUsually, PyTorch is used either as:\n\nA replacement for NumPy to use the power of GPUs.\nA deep learning research platform that provides maximum flexibility and speed.\n\nElaborating Further:\nA GPU-Ready Tensor Library\nIf you use NumPy, then you have used Tensors (a.k.a. ndarray).\n\nPyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the\ncomputation by a huge amount.\nWe provide a wide variety of tensor routines to accelerate and fit your scientific computation needs\nsuch as slicing, indexing, mathematical operations, linear algebra, reductions.\nAnd they are fast!\nDynamic Neural Networks: Tape-Based Autograd\nPyTorch has a unique way of building neural networks: using and replaying a tape recorder.\nMost frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.\nOne has to build a neural network and reuse the same structure again and again.\nChanging the way the network behaves means that one has to start from scratch.\nWith PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to\nchange the way your network behaves arbitrarily with zero lag or overhead. 
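A minimal sketch of what that means in practice (the tensor shape and loop threshold below are arbitrary): ordinary Python control flow decides the computation on each run, and reverse-mode autodiff still yields gradients for whatever actually executed.\nimport torch\n\nx = torch.randn(3, requires_grad=True)\ny = x * 2\n# Data-dependent control flow: the recorded tape is simply the code that ran this time\nwhile y.norm() < 100:\n    y = y * 2\ny.sum().backward()   # reverse-mode autodiff over the recorded operations\nprint(x.grad)        # gradients with respect to x\n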
Our inspiration comes\nfrom several research papers on this topic, as well as current and past work such as\ntorch-autograd,\nautograd,\nChainer, etc.\nWhile this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.\nYou get the best of speed and flexibility for your crazy research.\n\nPython First\nPyTorch is not a Python binding into a monolithic C++ framework.\nIt is built to be deeply integrated into Python.\nYou can use it naturally like you would use NumPy / SciPy / scikit-learn etc.\nYou can write your new neural network layers in Python itself, using your favorite libraries\nand use packages such as Cython and Numba.\nOur goal is to not reinvent the wheel where appropriate.\nImperative Experiences\nPyTorch is designed to be intuitive, linear in thought, and easy to use.\nWhen you execute a line of code, it gets executed. There isn't an asynchronous view of the world.\nWhen you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.\nThe stack trace points to exactly where your code was defined.\nWe hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.\nFast and Lean\nPyTorch has minimal framework overhead. We integrate acceleration libraries\nsuch as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed.\nAt the core, its CPU and GPU Tensor and neural network backends\nare mature and have been tested for years.\nHence, PyTorch is quite fast \u2013 whether you run small or large neural networks.\nThe memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.\nWe've written custom memory allocators for the GPU to make sure that\nyour deep learning models are maximally memory efficient.\nThis enables you to train bigger deep learning models than before.\nExtensions Without Pain\nWriting new neural network modules, or interfacing with PyTorch's Tensor API was designed to be straightforward\nand with minimal abstractions.\nYou can write new neural network layers in Python using the torch API\nor your favorite NumPy-based libraries such as SciPy.\nIf you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.\nNo wrapper code needs to be written. You can see a tutorial here and an example here.\nInstallation\nBinaries\nCommands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/\nNVIDIA Jetson Platforms\nPython wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here and the L4T container is published here\nThey require JetPack 4.2 and above, and @dusty-nv and @ptrblck are maintaining them.\nFrom Source\nPrerequisites\nIf you are installing from source, you will need:\n\nPython 3.8 or later (for Linux, Python 3.8.1+ is needed)\nA C++17 compatible compiler, such as clang\n\nWe highly recommend installing an Anaconda environment. 
You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.\nIf you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS)\n\nNVIDIA CUDA 11.0 or above\nNVIDIA cuDNN v7 or above\nCompiler compatible with CUDA\n\nNote: You could refer to the cuDNN Support Matrix for cuDNN versions with the various supported CUDA, CUDA driver and NVIDIA hardware\nIf you want to disable CUDA support, export the environment variable USE_CUDA=0.\nOther potentially useful environment variables may be found in setup.py.\nIf you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), Instructions to install PyTorch for Jetson Nano are available here\nIf you want to compile with ROCm support, install\n\nAMD ROCm 4.0 and above installation\nROCm is currently supported only for Linux systems.\n\nIf you want to disable ROCm support, export the environment variable USE_ROCM=0.\nOther potentially useful environment variables may be found in setup.py.\nInstall Dependencies\nCommon\nconda install cmake ninja\n# Run this command from the PyTorch directory after cloning the source code using the \u201cGet the PyTorch Source\u201c section below\npip install -r requirements.txt\n\nOn Linux\nconda install mkl mkl-include\n# CUDA only: Add LAPACK support for the GPU if needed\nconda install -c pytorch magma-cuda110  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo\n\nOn MacOS\n# Add this package on intel x86 processor machines only\nconda install mkl mkl-include\n# Add these packages if torch.distributed is needed\nconda install pkg-config libuv\n\nOn Windows\nconda install mkl mkl-include\n# Add these packages if torch.distributed is needed.\n# Distributed package support on Windows is a prototype feature and is subject to changes.\nconda install -c conda-forge libuv=1.39\n\nGet the PyTorch Source\ngit clone --recursive https://github.com/pytorch/pytorch\ncd pytorch\n# if you are updating an existing checkout\ngit submodule sync\ngit submodule update --init --recursive\n\nInstall PyTorch\nOn Linux\nIf you're compiling for AMD ROCm then first run this command:\n# Only run this if you're compiling for ROCm\npython tools/amd_build/build_amd.py\n\nInstall PyTorch\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\npython setup.py develop\n\n\nAside: If you are using Anaconda, you may experience an error caused by the linker:\nbuild/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized\ncollect2: error: ld returned 1 exit status\nerror: command 'g++' failed with exit status 1\n\nThis is caused by ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.\n\nOn macOS\npython3 setup.py develop\n\nOn Windows\nChoose Correct Visual Studio Version.\nPyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,\nProfessional, or Community Editions. You can also install the build tools from\nhttps://visualstudio.microsoft.com/visual-cpp-build-tools/. 
The build tools do not\ncome with Visual Studio Code by default.\nIf you want to build legacy python code, please refer to Building on legacy code and CUDA\nCPU-only builds\nIn this mode PyTorch computations will run on your CPU, not your GPU\nconda activate\npython setup.py develop\n\nNote on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instruction here is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.\nCUDA based build\nIn this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching\nNVTX is needed to build Pytorch with CUDA.\nNVTX is a part of CUDA distributive, where it is called \"Nsight Compute\". To install it onto an already installed CUDA run CUDA installation once again and check the corresponding checkbox.\nMake sure that CUDA with Nsight Compute is installed after Visual Studio.\nCurrently, VS 2017 / 2019, and Ninja are supported as the generator of CMake. If ninja.exe is detected in PATH, then Ninja will be used as the default generator, otherwise, it will use VS 2017 / 2019.\n If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.\nAdditional libraries such as\nMagma, oneDNN, a.k.a MKLDNN or DNNL, and Sccache are often needed. Please refer to the installation-helper to install them.\nYou can refer to the build_pytorch.bat script for some other environment variables configurations\ncmd\n\n:: Set the environment variables after you have downloaded and unzipped the mkl package,\n:: else CMake would throw an error as `Could NOT find OpenMP`.\nset CMAKE_INCLUDE_PATH={Your directory}\\mkl\\include\nset LIB={Your directory}\\mkl\\lib;%LIB%\n\n:: Read the content in the previous section carefully before you proceed.\n:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.\n:: \"Visual Studio 2019 Developer Command Prompt\" will be run automatically.\n:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.\nset CMAKE_GENERATOR_TOOLSET_VERSION=14.27\nset DISTUTILS_USE_SDK=1\nfor /f \"usebackq tokens=*\" %i in (`\"%ProgramFiles(x86)%\\Microsoft Visual Studio\\Installer\\vswhere.exe\" -version [15^,17^) -products * -latest -property installationPath`) do call \"%i\\VC\\Auxiliary\\Build\\vcvarsall.bat\" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%\n\n:: [Optional] If you want to override the CUDA host compiler\nset CUDAHOSTCXX=C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\cl.exe\n\npython setup.py develop\n\nAdjust Build Options (Optional)\nYou can adjust the configuration of cmake variables optionally (without building first), by doing\nthe following. 
For example, adjusting the pre-detected directories for CuDNN or BLAS can be done\nwith such a step.\nOn Linux\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\npython setup.py build --cmake-only\nccmake build  # or cmake-gui build\n\nOn macOS\nexport CMAKE_PREFIX_PATH=${CONDA_PREFIX:-\"$(dirname $(which conda))/../\"}\nMACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only\nccmake build  # or cmake-gui build\n\nDocker Image\nUsing pre-built images\nYou can also pull a pre-built docker image from Docker Hub and run with docker v19.03+\ndocker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest\n\nPlease note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.\nfor multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you\nshould increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.\nBuilding the image yourself\nNOTE: Must be built with a docker version > 18.06\nThe Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8.\nYou can pass PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it\nunset to use the default.\nmake -f docker.Makefile\n# images are tagged as docker.io/${your_docker_username}/pytorch\n\nBuilding the Documentation\nTo build documentation in various formats, you will need Sphinx and the\nreadthedocs theme.\ncd docs/\npip install -r requirements.txt\n\nYou can then build the documentation by running make <format> from the\ndocs/ folder. Run make to get a list of all available output formats.\nIf you get a katex error run npm install katex.  If it persists, try\nnpm install -g katex\n\nNote: if you installed nodejs with a different package manager (e.g.,\nconda) then npm will probably install a version of katex that is not\ncompatible with your version of nodejs and doc builds will fail.\nA combination of versions that is known to work is node@6.13.1 and\nkatex@0.13.18. To install the latter with npm you can run\nnpm install -g katex@0.13.18\n\nPrevious Versions\nInstallation instructions and binaries for previous PyTorch versions may be found\non our website.\nGetting Started\nThree-pointers to get you started:\n\nTutorials: get you started with understanding and using PyTorch\nExamples: easy to understand PyTorch code across all domains\nThe API Reference\nGlossary\n\nResources\n\nPyTorch.org\nPyTorch Tutorials\nPyTorch Examples\nPyTorch Models\nIntro to Deep Learning with PyTorch from Udacity\nIntro to Machine Learning with PyTorch from Udacity\nDeep Neural Networks with PyTorch from Coursera\nPyTorch Twitter\nPyTorch Blog\nPyTorch YouTube\n\nCommunication\n\nForums: Discuss implementations, research, etc. https://discuss.pytorch.org\nGitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.\nSlack: The PyTorch Slack hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is PyTorch Forums. If you need a slack invite, please fill this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1\nNewsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign-up here: https://eepurl.com/cbG0rv\nFacebook Page: Important announcements about PyTorch. 
https://www.facebook.com/pytorch\nFor brand guidelines, please visit our website at pytorch.org\n\nReleases and Contributing\nPyTorch has a 90-day release cycle (major releases). Please let us know if you encounter a bug by filing an issue.\nWe appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.\nIf you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.\nSending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.\nTo learn more about making a contribution to Pytorch, please see our Contribution page.\nThe Team\nPyTorch is a community-driven project with several skillful engineers and researchers contributing to it.\nPyTorch is currently maintained by Adam Paszke, Sam Gross, Soumith Chintala and Gregory Chanan with major contributions coming from hundreds of talented individuals in various forms and means.\nA non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.\nNote: This project is unrelated to hughperkins/pytorch with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.\nLicense\nPyTorch has a BSD-style license, as found in the LICENSE file.\n", "description": "GPU accelerated tensor library with autograd support."}, {"name": "toolz", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nToolz\nLICENSE\nInstall\nStructure and Heritage\nExample\nDependencies\nCyToolz\nSee Also\nContributions Welcome\nCommunity\n\n\n\n\n\nREADME.rst\n\n\n\n\nToolz\n\n  \n\nA set of utility functions for iterators, functions, and dictionaries.\nSee the PyToolz documentation at https://toolz.readthedocs.io\n\nLICENSE\nNew BSD. See License File.\n\nInstall\ntoolz is on the Python Package Index (PyPI):\npip install toolz\n\n\nStructure and Heritage\ntoolz is implemented in three parts:\nitertoolz, for operations on iterables. Examples: groupby,\nunique, interpose,\nfunctoolz, for higher-order functions. Examples: memoize,\ncurry, compose,\ndicttoolz, for operations on dictionaries. Examples: assoc,\nupdate-in, merge.\nThese functions come from the legacy of functional languages for list\nprocessing. They interoperate well to accomplish common complex tasks.\nRead our API\nDocumentation for\nmore details.\n\nExample\nThis builds a standard wordcount function from pieces within toolz:\n>>> def stem(word):\n...     \"\"\" Stem word to primitive form \"\"\"\n...     
return word.lower().rstrip(\",.!:;'-\\\"\").lstrip(\"'\\\"\")\n\n>>> from toolz import compose, frequencies\n>>> from toolz.curried import map\n>>> wordcount = compose(frequencies, map(stem), str.split)\n\n>>> sentence = \"This cat jumped over this other cat!\"\n>>> wordcount(sentence)\n{'this': 2, 'cat': 2, 'jumped': 1, 'over': 1, 'other': 1}\n\nDependencies\ntoolz supports Python 3.5+ with a common codebase.\nIt is pure Python and requires no dependencies beyond the standard\nlibrary.\nIt is, in short, a lightweight dependency.\n\nCyToolz\nThe toolz project has been reimplemented in Cython.\nThe cytoolz project is a drop-in replacement for the Pure Python\nimplementation.\nSee CyToolz GitHub Page for more\ndetails.\n\nSee Also\n\nUnderscore.js: A similar library for\nJavaScript\nEnumerable: A\nsimilar library for Ruby\nClojure: A functional language whose\nstandard library has several counterparts in toolz\nitertools: The\nPython standard library for iterator tools\nfunctools: The\nPython standard library for function tools\n\n\nContributions Welcome\ntoolz aims to be a repository for utility functions, particularly\nthose that come from the functional programming and list processing\ntraditions. We welcome contributions that fall within this scope.\nWe also try to keep the API small to keep toolz manageable.  The ideal\ncontribution is significantly different from existing functions and has\nprecedent in a few other functional systems.\nPlease take a look at our\nissue page\nfor contribution ideas.\n\nCommunity\nSee our mailing list.\nWe're friendly.\n\n\n", "description": "Functional standard library for iterators, functions, and dictionaries."}, {"name": "tomli", "readme": "\n\n\n\nTomli\n\nA lil' TOML parser\n\nTable of Contents generated with mdformat-toc\n\nIntro\nInstallation\nUsage\n\nParse a TOML string\nParse a TOML file\nHandle invalid TOML\nConstruct decimal.Decimals from TOML floats\n\n\nFAQ\n\nWhy this parser?\nIs comment preserving round-trip parsing supported?\nIs there a dumps, write or encode function?\nHow do TOML types map into Python types?\n\n\nPerformance\n\nIntro\nTomli is a Python library for parsing TOML.\nTomli is fully compatible with TOML v1.0.0.\nInstallation\npip install tomli\n\nUsage\nParse a TOML string\nimport tomli\n\ntoml_str = \"\"\"\n           gretzky = 99\n\n           [kurri]\n           jari = 17\n           \"\"\"\n\ntoml_dict = tomli.loads(toml_str)\nassert toml_dict == {\"gretzky\": 99, \"kurri\": {\"jari\": 17}}\n\nParse a TOML file\nimport tomli\n\nwith open(\"path_to_file/conf.toml\", \"rb\") as f:\n    toml_dict = tomli.load(f)\n\nThe file must be opened in binary mode (with the \"rb\" flag).\nBinary mode will enforce decoding the file as UTF-8 with universal newlines disabled,\nboth of which are required to correctly parse TOML.\nHandle invalid TOML\nimport tomli\n\ntry:\n    toml_dict = tomli.loads(\"]] this is invalid TOML [[\")\nexcept tomli.TOMLDecodeError:\n    print(\"Yep, definitely not valid.\")\n\nNote that error messages are considered informational only.\nThey should not be assumed to stay constant across Tomli versions.\nConstruct decimal.Decimals from TOML floats\nfrom decimal import Decimal\nimport tomli\n\ntoml_dict = tomli.loads(\"precision-matters = 0.982492\", parse_float=Decimal)\nassert toml_dict[\"precision-matters\"] == Decimal(\"0.982492\")\n\nNote that decimal.Decimal can be replaced with another callable that converts a TOML float from string to a Python type.\nThe decimal.Decimal is, however, a practical choice 
for use cases where float inaccuracies can not be tolerated.\nIllegal types are dict and list, and their subtypes.\nA ValueError will be raised if parse_float produces illegal types.\nFAQ\nWhy this parser?\n\nit's lil'\npure Python with zero dependencies\nthe fastest pure Python parser *:\n15x as fast as tomlkit,\n2.4x as fast as toml\noutputs basic data types only\n100% spec compliant: passes all tests in\na test set\nsoon to be merged to the official\ncompliance tests for TOML\nrepository\nthoroughly tested: 100% branch coverage\n\nIs comment preserving round-trip parsing supported?\nNo.\nThe tomli.loads function returns a plain dict that is populated with builtin types and types from the standard library only.\nPreserving comments requires a custom type to be returned so will not be supported,\nat least not by the tomli.loads and tomli.load functions.\nLook into TOML Kit if preservation of style is what you need.\nIs there a dumps, write or encode function?\nTomli-W is the write-only counterpart of Tomli, providing dump and dumps functions.\nThe core library does not include write capability, as most TOML use cases are read-only, and Tomli intends to be minimal.\nHow do TOML types map into Python types?\n\n\n\nTOML type\nPython type\nDetails\n\n\n\n\nDocument Root\ndict\n\n\n\nKey\nstr\n\n\n\nString\nstr\n\n\n\nInteger\nint\n\n\n\nFloat\nfloat\n\n\n\nBoolean\nbool\n\n\n\nOffset Date-Time\ndatetime.datetime\ntzinfo attribute set to an instance of datetime.timezone\n\n\nLocal Date-Time\ndatetime.datetime\ntzinfo attribute set to None\n\n\nLocal Date\ndatetime.date\n\n\n\nLocal Time\ndatetime.time\n\n\n\nArray\nlist\n\n\n\nTable\ndict\n\n\n\nInline Table\ndict\n\n\n\n\nPerformance\nThe benchmark/ folder in this repository contains a performance benchmark for comparing the various Python TOML parsers.\nThe benchmark can be run with tox -e benchmark-pypi.\nRunning the benchmark on my personal computer output the following:\nfoo@bar:~/dev/tomli$ tox -e benchmark-pypi\nbenchmark-pypi installed: attrs==19.3.0,click==7.1.2,pytomlpp==1.0.2,qtoml==0.3.0,rtoml==0.7.0,toml==0.10.2,tomli==1.1.0,tomlkit==0.7.2\nbenchmark-pypi run-test-pre: PYTHONHASHSEED='2658546909'\nbenchmark-pypi run-test: commands[0] | python -c 'import datetime; print(datetime.date.today())'\n2021-07-23\nbenchmark-pypi run-test: commands[1] | python --version\nPython 3.8.10\nbenchmark-pypi run-test: commands[2] | python benchmark/run.py\nParsing data.toml 5000 times:\n------------------------------------------------------\n    parser |  exec time | performance (more is better)\n-----------+------------+-----------------------------\n     rtoml |    0.901 s | baseline (100%)\n  pytomlpp |     1.08 s | 83.15%\n     tomli |     3.89 s | 23.15%\n      toml |     9.36 s | 9.63%\n     qtoml |     11.5 s | 7.82%\n   tomlkit |     56.8 s | 1.59%\n\nThe parsers are ordered from fastest to slowest, using the fastest parser as baseline.\nTomli performed the best out of all pure Python TOML parsers,\nlosing only to pytomlpp (wraps C++) and rtoml (wraps Rust).\n", "description": "TOML parser for Python."}, {"name": "toml", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nTOML\nInstallation\nQuick Tutorial\nNote\nAPI Reference\nLicensing\n\n\n\n\n\nREADME.rst\n\n\n\n\nTOML\n\n\n\n\nA Python library for parsing and creating TOML.\nThe module passes the TOML test suite.\nSee also:\n\nThe TOML Standard\nThe currently supported TOML specification\n\n\nInstallation\nTo install the latest release on PyPI,\nsimply run:\npip install toml\n\nOr to install the latest 
development version, run:\ngit clone https://github.com/uiri/toml.git\ncd toml\npython setup.py install\n\n\nQuick Tutorial\ntoml.loads takes in a string containing standard TOML-formatted data and\nreturns a dictionary containing the parsed data.\n>>> import toml\n>>> toml_string = \"\"\"\n... # This is a TOML document.\n...\n... title = \"TOML Example\"\n...\n... [owner]\n... name = \"Tom Preston-Werner\"\n... dob = 1979-05-27T07:32:00-08:00 # First class dates\n...\n... [database]\n... server = \"192.168.1.1\"\n... ports = [ 8001, 8001, 8002 ]\n... connection_max = 5000\n... enabled = true\n...\n... [servers]\n...\n...   # Indentation (tabs and/or spaces) is allowed but not required\n...   [servers.alpha]\n...   ip = \"10.0.0.1\"\n...   dc = \"eqdc10\"\n...\n...   [servers.beta]\n...   ip = \"10.0.0.2\"\n...   dc = \"eqdc10\"\n...\n... [clients]\n... data = [ [\"gamma\", \"delta\"], [1, 2] ]\n...\n... # Line breaks are OK when inside arrays\n... hosts = [\n...   \"alpha\",\n...   \"omega\"\n... ]\n... \"\"\"\n>>> parsed_toml = toml.loads(toml_string)\ntoml.dumps takes a dictionary and returns a string containing the\ncorresponding TOML-formatted data.\n>>> new_toml_string = toml.dumps(parsed_toml)\n>>> print(new_toml_string)\ntitle = \"TOML Example\"\n[owner]\nname = \"Tom Preston-Werner\"\ndob = 1979-05-27T07:32:00Z\n[database]\nserver = \"192.168.1.1\"\nports = [ 8001, 8001, 8002,]\nconnection_max = 5000\nenabled = true\n[clients]\ndata = [ [ \"gamma\", \"delta\",], [ 1, 2,],]\nhosts = [ \"alpha\", \"omega\",]\n[servers.alpha]\nip = \"10.0.0.1\"\ndc = \"eqdc10\"\n[servers.beta]\nip = \"10.0.0.2\"\ndc = \"eqdc10\"\ntoml.dump takes a dictionary and a file descriptor and returns a string containing the\ncorresponding TOML-formatted data.\n>>> with open('new_toml_file.toml', 'w') as f:\n...     new_toml_string = toml.dump(parsed_toml, f)\n>>> print(new_toml_string)\ntitle = \"TOML Example\"\n[owner]\nname = \"Tom Preston-Werner\"\ndob = 1979-05-27T07:32:00Z\n[database]\nserver = \"192.168.1.1\"\nports = [ 8001, 8001, 8002,]\nconnection_max = 5000\nenabled = true\n[clients]\ndata = [ [ \"gamma\", \"delta\",], [ 1, 2,],]\nhosts = [ \"alpha\", \"omega\",]\n[servers.alpha]\nip = \"10.0.0.1\"\ndc = \"eqdc10\"\n[servers.beta]\nip = \"10.0.0.2\"\ndc = \"eqdc10\"\nFor more functions, view the API Reference below.\n\nNote\nFor Numpy users, by default the data types np.floatX will not be translated to floats by toml, but will instead be encoded as strings. 
To get around this, specify the TomlNumpyEncoder when saving your data.\n>>> import toml\n>>> import numpy as np\n>>> a = np.arange(0, 10, dtype=np.double)\n>>> output = {'a': a}\n>>> toml.dumps(output)\n'a = [ \"0.0\", \"1.0\", \"2.0\", \"3.0\", \"4.0\", \"5.0\", \"6.0\", \"7.0\", \"8.0\", \"9.0\",]\\n'\n>>> toml.dumps(output, encoder=toml.TomlNumpyEncoder())\n'a = [ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0,]\\n'\n\nAPI Reference\n\ntoml.load(f, _dict=dict)\nParse a file or a list of files as TOML and return a dictionary.\n\n\nArgs:\nf: A path to a file, list of filepaths (to be read into single\nobject) or a file descriptor\n_dict: The class of the dictionary object to be returned\n\n\n\nReturns:A dictionary (or object _dict) containing parsed TOML data\n\n\nRaises:\nTypeError: When f is an invalid type or is a list containing\ninvalid types\nTomlDecodeError: When an error occurs while decoding the file(s)\n\n\n\n\n\n\ntoml.loads(s, _dict=dict)\nParse a TOML-formatted string to a dictionary.\n\n\nArgs:\ns: The TOML-formatted string to be parsed\n_dict: Specifies the class of the returned toml dictionary\n\n\n\nReturns:A dictionary (or object _dict) containing parsed TOML data\n\n\nRaises:\nTypeError: When a non-string object is passed\nTomlDecodeError: When an error occurs while decoding the\nTOML-formatted string\n\n\n\n\n\n\ntoml.dump(o, f, encoder=None)\nWrite a dictionary to a file containing TOML-formatted data\n\n\nArgs:\no: An object to be converted into TOML\nf: A File descriptor where the TOML-formatted output should be stored\nencoder: An instance of TomlEncoder (or subclass) for encoding the object. If None, will default to TomlEncoder\n\n\n\nReturns:A string containing the TOML-formatted data corresponding to object o\n\n\nRaises:\nTypeError: When anything other than file descriptor is passed\n\n\n\n\n\n\ntoml.dumps(o, encoder=None)\nCreate a TOML-formatted string from an input object\n\n\nArgs:\no: An object to be converted into TOML\nencoder: An instance of TomlEncoder (or subclass) for encoding the object. If None, will default to TomlEncoder\n\n\n\nReturns:A string containing the TOML-formatted data corresponding to object o\n\n\n\n\n\n\n\nLicensing\nThis project is released under the terms of the MIT Open Source License. View\nLICENSE.txt for more information.\n\n\n", "description": "TOML parser and encoder for Python."}, {"name": "tinycss2", "readme": "\ntinycss2 is a low-level CSS parser and generator written in Python: it can\nparse strings, return objects representing tokens and blocks, and generate CSS\nstrings corresponding to these objects.\nBased on the CSS Syntax Level 3 specification, tinycss2 knows the grammar of\nCSS but doesn\u2019t know specific rules, properties or values supported in various\nCSS modules.\n\nFree software: BSD license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/tinycss2\nChangelog: https://github.com/Kozea/tinycss2/releases\nCode, issues, tests: https://github.com/Kozea/tinycss2\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\ntinycss2 has been created and developed by Kozea (https://kozea.fr).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to tinycss2. 
Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the BSD\n3-clause license, without any additional terms or conditions. For full\nauthorship information, see the version control history.\n", "description": "Low-level CSS parser and generator."}, {"name": "tifffile", "readme": "\nTifffile is a Python library to\n\nstore NumPy arrays in TIFF (Tagged Image File Format) files, and\nread image and metadata from TIFF-like files used in bioimaging.\n\nImage and metadata can be read from TIFF, BigTIFF, OME-TIFF, DNG, STK, LSM,\nSGI, NIHImage, ImageJ, MMStack, NDTiff, FluoView, ScanImage, SEQ, GEL,\nSVS, SCN, SIS, BIF, ZIF (Zoomable Image File Format), QPTIFF (QPI, PKI), NDPI,\nand GeoTIFF formatted files.\nImage data can be read as NumPy arrays or Zarr arrays/groups from strips,\ntiles, pages (IFDs), SubIFDs, higher order series, and pyramidal levels.\nImage data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack\ncompatible files in multi-page, volumetric, pyramidal, memory-mappable,\ntiled, predicted, or compressed form.\nMany compression and predictor schemes are supported via the imagecodecs\nlibrary, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd,\nJPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER,\nJetraw, 24-bit floating-point, and horizontal differencing.\nTifffile can also be used to inspect TIFF structures, read image data from\nmulti-dimensional file sequences, write fsspec ReferenceFileSystem for\nTIFF files and image file sequences, patch TIFF tag values, and parse\nmany proprietary metadata formats.\n\nAuthor:\nChristoph Gohlke\n\nLicense:\nBSD 3-Clause\n\nVersion:\n2023.8.30\n\nDOI:\n10.5281/zenodo.6795860\n\n\n\nQuickstart\nInstall the tifffile package and all dependencies from the\nPython Package Index:\npython -m pip install -U tifffile[all]\nTifffile is also available in other package repositories such as Anaconda,\nDebian, and MSYS2.\nThe tifffile library is type annotated and documented via docstrings:\npython -c \"import tifffile; help(tifffile)\"\nTifffile can be used as a console script to inspect and preview TIFF files:\npython -m tifffile --help\nSee Examples for using the programming interface.\nSource code and support are available on\nGitHub.\nSupport is also provided on the\nimage.sc forum.\n\n\nRequirements\nThis revision was tested with the following requirements and dependencies\n(other versions may work):\n\nCPython 3.9.13, 3.10.11, 3.11.5, 3.12rc, 64-bit\nNumPy 1.25.2\nImagecodecs 2023.8.12\n(required for encoding or decoding LZW, JPEG, etc. 
compressed segments)\nMatplotlib 3.7.2\n(required for plotting)\nLxml 4.9.3\n(required only for validating and printing XML)\nZarr 2.16.1\n(required only for opening Zarr stores)\nFsspec 2023.6.0\n(required only for opening ReferenceFileSystem files)\n\n\n\nRevisions\n2023.8.30\n\nPass 5007 tests.\nSupport exclusive file creation mode (#221, #223).\n\n2023.8.25\n\nVerify shaped metadata is compatible with page shape.\nSupport out parameter when returning selection from imread (#222).\n\n2023.8.12\n\nSupport decompressing EER frames.\nFacilitate filtering logged warnings (#216).\nRead more tags from UIC1Tag (#217).\nFix premature closing of files in main (#218).\nDon\u2019t force matplotlib backend to tkagg in main (#219).\nAdd py.typed marker.\nDrop support for imagecodecs < 2023.3.16.\n\n2023.7.18\n\nLimit threading via TIFFFILE_NUM_THREADS environment variable (#215).\nRemove maxworkers parameter from tiff2fsspec (breaking).\n\n2023.7.10\n\nIncrease default strip size to 256 KB when writing with compression.\nFix ZarrTiffStore with non-default chunkmode.\n\n2023.7.4\n\nAdd option to return selection from imread (#200).\nFix reading OME series with missing trailing frames (#199).\nFix fsspec reference for WebP compressed segments missing alpha channel.\nFix linting issues.\nDetect files written by Agilent Technologies.\nDrop support for Python 3.8 and numpy < 1.21 (NEP29).\n\n2023.4.12\n\nDo not write duplicate ImageDescription tags from extratags (breaking).\nSupport multifocal SVS files (#193).\nLog warning when filtering out extratags.\nFix writing OME-TIFF with image description in extratags.\nIgnore invalid predictor tag value if prediction is not used.\nRaise KeyError if ZarrStore is missing requested chunk.\n\n2023.3.21\n\nFix reading MMstack with missing data (#187).\n\n2023.3.15\n\nFix corruption using tile generators with prediction/compression (#185).\nAdd parser for Micro-Manager MMStack series (breaking).\nReturn micromanager_metadata IndexMap as numpy array (breaking).\nRevert optimizations for Micro-Manager OME series.\nDo not use numcodecs zstd in write_fsspec (kerchunk issue 317).\nMore type annotations.\n\n2023.2.28\n\nFix reading some Micro-Manager metadata from corrupted files.\nSpeed up reading Micro-Manager indexmap for creation of OME series.\n\n2023.2.27\n\nUse Micro-Manager indexmap offsets to create virtual TiffFrames.\nFixes for future imagecodecs.\n\n2023.2.3\n\nFix overflow in calculation of databytecounts for large NDPI files.\n\n2023.2.2\n\nFix regression reading layered NDPI files.\nAdd option to specify offset in FileHandle.read_array.\n\n2023.1.23\n\nSupport reading NDTiffStorage.\nSupport reading PIXTIFF compression.\nSupport LERC with Zstd or Deflate compression.\nDo not write duplicate and select extratags.\nAllow to write uncompressed image data beyond 4 GB in classic TIFF.\nAdd option to specify chunkshape and dtype in FileSequence.asarray.\nAdd option for imread to write to output in FileSequence.asarray (#172).\nAdd function to read GDAL structural metadata.\nAdd function to read NDTiff.index files.\nFix IndexError accessing TiffFile.mdgel_metadata in non-MDGEL files.\nFix unclosed file ResourceWarning in TiffWriter.\nFix non-bool predictor arguments (#167).\nRelax detection of OME-XML (#173).\nRename some TiffFrame parameters (breaking).\nDeprecate squeeze_axes (will change signature).\nUse defusexml in xml2dict.\n\n2022.10.10\n\n\u2026\n\nRefer to the CHANGES file for older revisions.\n\n\nNotes\nTIFF, the Tagged Image File Format, was created by 
the Aldus Corporation and\nAdobe Systems Incorporated. STK, LSM, FluoView, SGI, SEQ, GEL, QPTIFF, NDPI,\nSCN, SVS, ZIF, BIF, and OME-TIFF, are custom extensions defined by Molecular\nDevices (Universal Imaging Corporation), Carl Zeiss MicroImaging, Olympus,\nSilicon Graphics International, Media Cybernetics, Molecular Dynamics,\nPerkinElmer, Hamamatsu, Leica, ObjectivePathology, Roche Digital Pathology,\nand the Open Microscopy Environment consortium, respectively.\nTifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32, and\n64-bit integer, 16, 32 and 64-bit float, grayscale and multi-sample images.\nSpecifically, CCITT and OJPEG compression, chroma subsampling without JPEG\ncompression, color space transformations, samples with differing types, or\nIPTC, ICC, and XMP metadata are not implemented.\nBesides classic TIFF, tifffile supports several TIFF-like formats that do not\nstrictly adhere to the TIFF6 specification. Some formats allow file and data\nsizes to exceed the 4 GB limit of the classic TIFF:\n\nBigTIFF is identified by version number 43 and uses different file\nheader, IFD, and tag structures with 64-bit offsets. The format also adds\n64-bit data types. Tifffile can read and write BigTIFF files.\nImageJ hyperstacks store all image data, which may exceed 4 GB,\ncontiguously after the first IFD. Files > 4 GB contain one IFD only.\nThe size and shape of the up to 6-dimensional image data can be determined\nfrom the ImageDescription tag of the first IFD, which is Latin-1 encoded.\nTifffile can read and write ImageJ hyperstacks.\nOME-TIFF files store up to 8-dimensional image data in one or multiple\nTIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata found in the\nImageDescription tag of the first IFD defines the position of TIFF IFDs in\nthe high dimensional image data. Tifffile can read OME-TIFF files (except\nmulti-file pyramidal) and write NumPy arrays to single-file OME-TIFF.\nMicro-Manager NDTiff stores multi-dimensional image data in one\nor more classic TIFF files. Metadata contained in a separate NDTiff.index\nbinary file defines the position of the TIFF IFDs in the image array.\nEach TIFF file also contains metadata in a non-TIFF binary structure at\noffset 8. Downsampled image data of pyramidal datasets are stored in\nseparate folders. Tifffile can read NDTiff files. Version 0 and 1 series,\ntiling, stitching, and multi-resolution pyramids are not supported.\nMicro-Manager MMStack stores 6-dimensional image data in one or more\nclassic TIFF files. Metadata contained in non-TIFF binary structures and\nJSON strings define the image stack dimensions and the position of the image\nframe data in the file and the image stack. The TIFF structures and metadata\nare often corrupted or wrong. Tifffile can read MMStack files.\nCarl Zeiss LSM files store all IFDs below 4 GB and wrap around 32-bit\nStripOffsets pointing to image data above 4 GB. The StripOffsets of each\nseries and position require separate unwrapping. The StripByteCounts tag\ncontains the number of bytes for the uncompressed data. Tifffile can read\nLSM files of any size.\nMetaMorph Stack, STK files contain additional image planes stored\ncontiguously after the image data of the first page. The total number of\nplanes is equal to the count of the UIC2tag. 
Tifffile can read STK files.\nZIF, the Zoomable Image File format, is a subspecification of BigTIFF\nwith SGI\u2019s ImageDepth extension and additional compression schemes.\nOnly little-endian, tiled, interleaved, 8-bit per sample images with\nJPEG, PNG, JPEG XR, and JPEG 2000 compression are allowed. Tifffile can\nread and write ZIF files.\nHamamatsu NDPI files use some 64-bit offsets in the file header, IFD,\nand tag structures. Single, LONG typed tag values can exceed 32-bit.\nThe high bytes of 64-bit tag values and offsets are stored after IFD\nstructures. Tifffile can read NDPI files > 4 GB.\nJPEG compressed segments with dimensions >65530 or missing restart markers\ncannot be decoded with common JPEG libraries. Tifffile works around this\nlimitation by separately decoding the MCUs between restart markers, which\nperforms poorly. BitsPerSample, SamplesPerPixel, and\nPhotometricInterpretation tags may contain wrong values, which can be\ncorrected using the value of tag 65441.\nPhilips TIFF slides store wrong ImageWidth and ImageLength tag values\nfor tiled pages. The values can be corrected using the DICOM_PIXEL_SPACING\nattributes of the XML formatted description of the first page. Tifffile can\nread Philips slides.\nVentana/Roche BIF slides store tiles and metadata in a BigTIFF container.\nTiles may overlap and require stitching based on the TileJointInfo elements\nin the XMP tag. Volumetric scans are stored using the ImageDepth extension.\nTifffile can read BIF and decode individual tiles but does not perform\nstitching.\nScanImage optionally allows corrupted non-BigTIFF files > 2 GB.\nThe values of StripOffsets and StripByteCounts can be recovered using the\nconstant differences of the offsets of IFD and tag values throughout the\nfile. Tifffile can read such files if the image data are stored contiguously\nin each page.\nGeoTIFF sparse files allow strip or tile offsets and byte counts to be 0.\nSuch segments are implicitly set to 0 or the NODATA value on reading.\nTifffile can read GeoTIFF sparse files.\nTifffile shaped files store the array shape and user-provided metadata\nof multi-dimensional image series in JSON format in the ImageDescription tag\nof the first page of the series. The format allows for multiple series,\nSubIFDs, sparse segments with zero offset and byte count, and truncated\nseries, where only the first page of a series is present, and the image data\nare stored contiguously. No other software besides Tifffile supports the\ntruncated format.\n\nOther libraries for reading, writing, inspecting, or manipulating scientific\nTIFF files from Python are\naicsimageio,\napeer-ometiff-library,\nbigtiff,\nfabio.TiffIO,\nGDAL,\nimread,\nlarge_image,\nopenslide-python,\nopentile,\npylibtiff,\npylsm,\npymimage,\npython-bioformats,\npytiff,\nscanimagetiffreader-python,\nSimpleITK,\nslideio,\ntiffslide,\ntifftools,\ntyf,\nxtiff, and\nndtiff.\n\n\nReferences\n\nTIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.\nhttps://www.adobe.io/open/standards/TIFF.html\nTIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html\nThe BigTIFF File Format.\nhttps://www.awaresystems.be/imaging/tiff/bigtiff.html\nMetaMorph Stack (STK) Image File Format.\nhttp://mdc.custhelp.com/app/answers/detail/a_id/18862\nImage File Format Description LSM 5/7 Release 6.0 (ZEN 2010).\nCarl Zeiss MicroImaging GmbH. BioSciences. 
May 10, 2011\nThe OME-TIFF format.\nhttps://docs.openmicroscopy.org/ome-model/latest/\nUltraQuant(r) Version 6.0 for Windows Start-Up Guide.\nhttp://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf\nMicro-Manager File Formats.\nhttps://micro-manager.org/wiki/Micro-Manager_File_Formats\nScanImage BigTiff Specification.\nhttps://docs.scanimage.org/Appendix/ScanImage+BigTiff+Specification.html\nZIF, the Zoomable Image File format. https://zif.photo/\nGeoTIFF File Format https://gdal.org/drivers/raster/gtiff.html\nCloud optimized GeoTIFF.\nhttps://github.com/cogeotiff/cog-spec/blob/master/spec.md\nTags for TIFF and Related Specifications. Digital Preservation.\nhttps://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml\nCIPA DC-008-2016: Exchangeable image file format for digital still cameras:\nExif Version 2.31.\nhttp://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf\nThe EER (Electron Event Representation) file format.\nhttps://github.com/fei-company/EerReaderLib\nDigital Negative (DNG) Specification. Version 1.5.0.0, June 2012.\nhttps://www.adobe.com/content/dam/acom/en/products/photoshop/pdfs/\ndng_spec_1.5.0.0.pdf\nRoche Digital Pathology. BIF image file format for digital pathology.\nhttps://diagnostics.roche.com/content/dam/diagnostics/Blueprint/en/pdf/rmd/\nRoche-Digital-Pathology-BIF-Whitepaper.pdf\nAstro-TIFF specification. https://astro-tiff.sourceforge.io/\nAperio Technologies, Inc. Digital Slides and Third-Party Data Interchange.\nAperio_Digital_Slides_and_Third-party_data_interchange.pdf\nPerkinElmer image format.\nhttps://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/\nPKI_Image%20Format.docx\nNDTiffStorage. https://github.com/micro-manager/NDTiffStorage\n\n\n\nExamples\nWrite a NumPy array to a single-page RGB TIFF file:\n>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')\n>>> imwrite('temp.tif', data, photometric='rgb')\n\nRead the image from the TIFF file as NumPy array:\n>>> image = imread('temp.tif')\n>>> image.shape\n(256, 256, 3)\n\nUse the photometric and planarconfig arguments to write a 3x3x3 NumPy\narray to an interleaved RGB, a planar RGB, or a 3-page grayscale TIFF:\n>>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8')\n>>> imwrite('temp.tif', data, photometric='rgb')\n>>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate')\n>>> imwrite('temp.tif', data, photometric='minisblack')\n\nUse the extrasamples argument to specify how extra components are\ninterpreted, for example, for an RGBA image with unassociated alpha channel:\n>>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8')\n>>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha'])\n\nWrite a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF file:\n>>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16')\n>>> imwrite('temp.tif', data, photometric='minisblack')\n\nRead the whole image stack from the multi-page TIFF file as NumPy array:\n>>> image_stack = imread('temp.tif')\n>>> image_stack.shape\n(64, 301, 219)\n>>> image_stack.dtype\ndtype('uint16')\n\nRead the image from the first page in the TIFF file as NumPy array:\n>>> image = imread('temp.tif', key=0)\n>>> image.shape\n(301, 219)\n\nRead images from a selected range of pages:\n>>> images = imread('temp.tif', key=range(4, 40, 2))\n>>> images.shape\n(18, 301, 219)\n\nIterate over all pages in the TIFF file and successively read images:\n>>> with TiffFile('temp.tif') as tif:\n...     for page in tif.pages:\n...  
       image = page.asarray()\n\nGet information about the image stack in the TIFF file without reading\nany image data:\n>>> tif = TiffFile('temp.tif')\n>>> len(tif.pages)  # number of pages in the file\n64\n>>> page = tif.pages[0]  # get shape and dtype of image in first page\n>>> page.shape\n(301, 219)\n>>> page.dtype\ndtype('uint16')\n>>> page.axes\n'YX'\n>>> series = tif.series[0]  # get shape and dtype of first image series\n>>> series.shape\n(64, 301, 219)\n>>> series.dtype\ndtype('uint16')\n>>> series.axes\n'QYX'\n>>> tif.close()\n\nInspect the \u201cXResolution\u201d tag from the first page in the TIFF file:\n>>> with TiffFile('temp.tif') as tif:\n...     tag = tif.pages[0].tags['XResolution']\n>>> tag.value\n(1, 1)\n>>> tag.name\n'XResolution'\n>>> tag.code\n282\n>>> tag.count\n1\n>>> tag.dtype\n<DATATYPE.RATIONAL: 5>\n\nIterate over all tags in the TIFF file:\n>>> with TiffFile('temp.tif') as tif:\n...     for page in tif.pages:\n...         for tag in page.tags:\n...             tag_name, tag_value = tag.name, tag.value\n\nOverwrite the value of an existing tag, for example, XResolution:\n>>> with TiffFile('temp.tif', mode='r+') as tif:\n...     _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000))\n\nWrite a 5-dimensional floating-point array using BigTIFF format, separate\ncolor components, tiling, Zlib compression level 8, horizontal differencing\npredictor, and additional metadata:\n>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')\n>>> imwrite(\n...     'temp.tif',\n...     data,\n...     bigtiff=True,\n...     photometric='rgb',\n...     planarconfig='separate',\n...     tile=(32, 32),\n...     compression='zlib',\n...     compressionargs={'level': 8},\n...     predictor=True,\n...     metadata={'axes': 'TZCYX'}\n... )\n\nWrite a 10 fps time series of volumes with xyz voxel size 2.6755x2.6755x3.9474\nmicron^3 to an ImageJ hyperstack formatted TIFF file:\n>>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32')\n>>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])]\n>>> imwrite(\n...     'temp.tif',\n...     volume,\n...     imagej=True,\n...     resolution=(1./2.6755, 1./2.6755),\n...     metadata={\n...         'spacing': 3.947368,\n...         'unit': 'um',\n...         'finterval': 1/10,\n...         'fps': 10.0,\n...         'axes': 'TZYX',\n...         'Labels': image_labels,\n...     }\n... )\n\nRead the volume and metadata from the ImageJ hyperstack file:\n>>> with TiffFile('temp.tif') as tif:\n...     volume = tif.asarray()\n...     axes = tif.series[0].axes\n...     imagej_metadata = tif.imagej_metadata\n>>> volume.shape\n(6, 57, 256, 256)\n>>> axes\n'TZYX'\n>>> imagej_metadata['slices']\n57\n>>> imagej_metadata['frames']\n6\n\nMemory-map the contiguous image data in the ImageJ hyperstack file:\n>>> memmap_volume = memmap('temp.tif')\n>>> memmap_volume.shape\n(6, 57, 256, 256)\n>>> del memmap_volume\n\nCreate a TIFF file containing an empty image and write to the memory-mapped\nNumPy array (note: this does not work with compression or tiling):\n>>> memmap_image = memmap(\n...     'temp.tif',\n...     shape=(256, 256, 3),\n...     dtype='float32',\n...     photometric='rgb'\n... 
)\n>>> type(memmap_image)\n<class 'numpy.memmap'>\n>>> memmap_image[255, 255, 1] = 1.0\n>>> memmap_image.flush()\n>>> del memmap_image\n\nWrite two NumPy arrays to a multi-series TIFF file (note: other TIFF readers\nwill not recognize the two series; use the OME-TIFF format for better\ninteroperability):\n>>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8')\n>>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16')\n>>> with TiffWriter('temp.tif') as tif:\n...     tif.write(series0, photometric='rgb')\n...     tif.write(series1, photometric='minisblack')\n\nRead the second image series from the TIFF file:\n>>> series1 = imread('temp.tif', series=1)\n>>> series1.shape\n(4, 256, 256)\n\nSuccessively write the frames of one contiguous series to a TIFF file:\n>>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')\n>>> with TiffWriter('temp.tif') as tif:\n...     for frame in data:\n...         tif.write(frame, contiguous=True)\n\nAppend an image series to the existing TIFF file (note: this does not work\nwith ImageJ hyperstack or OME-TIFF files):\n>>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')\n>>> imwrite('temp.tif', data, photometric='rgb', append=True)\n\nCreate a TIFF file from a generator of tiles:\n>>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16')\n>>> def tiles(data, tileshape):\n...     for y in range(0, data.shape[0], tileshape[0]):\n...         for x in range(0, data.shape[1], tileshape[1]):\n...             yield data[y : y + tileshape[0], x : x + tileshape[1]]\n>>> imwrite(\n...     'temp.tif',\n...     tiles(data, (16, 16)),\n...     tile=(16, 16),\n...     shape=data.shape,\n...     dtype=data.dtype,\n...     photometric='rgb'\n... )\n\nWrite a multi-dimensional, multi-resolution (pyramidal), multi-series OME-TIFF\nfile with metadata. Sub-resolution images are written to SubIFDs. Limit\nparallel encoding to 2 threads. Write a thumbnail image as a separate image\nseries:\n>>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8')\n>>> subresolutions = 2\n>>> pixelsize = 0.29  # micrometer\n>>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif:\n...     metadata={\n...         'axes': 'TCYXS',\n...         'SignificantBits': 10,\n...         'TimeIncrement': 0.1,\n...         'TimeIncrementUnit': 's',\n...         'PhysicalSizeX': pixelsize,\n...         'PhysicalSizeXUnit': '\u00b5m',\n...         'PhysicalSizeY': pixelsize,\n...         'PhysicalSizeYUnit': '\u00b5m',\n...         'Channel': {'Name': ['Channel 1', 'Channel 2']},\n...         'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['\u00b5m'] * 16}\n...     }\n...     options = dict(\n...         photometric='rgb',\n...         tile=(128, 128),\n...         compression='jpeg',\n...         resolutionunit='CENTIMETER',\n...         maxworkers=2\n...     )\n...     tif.write(\n...         data,\n...         subifds=subresolutions,\n...         resolution=(1e4 / pixelsize, 1e4 / pixelsize),\n...         metadata=metadata,\n...         **options\n...     )\n...     # write pyramid levels to the two subifds\n...     # in production use resampling to generate sub-resolution images\n...     for level in range(subresolutions):\n...         mag = 2**(level + 1)\n...         tif.write(\n...             data[..., ::mag, ::mag, :],\n...             subfiletype=1,\n...             resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize),\n...             **options\n...         )\n...     # add a thumbnail image as a separate series\n...     
# it is recognized by QuPath as an associated image\n...     thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8')\n...     tif.write(thumbnail, metadata={'Name': 'thumbnail'})\n\nAccess the image levels in the pyramidal OME-TIFF file:\n>>> baseimage = imread('temp.ome.tif')\n>>> second_level = imread('temp.ome.tif', series=0, level=1)\n>>> with TiffFile('temp.ome.tif') as tif:\n...     baseimage = tif.series[0].asarray()\n...     second_level = tif.series[0].levels[1].asarray()\n\nIterate over and decode single JPEG compressed tiles in the TIFF file:\n>>> with TiffFile('temp.ome.tif') as tif:\n...     fh = tif.filehandle\n...     for page in tif.pages:\n...         for index, (offset, bytecount) in enumerate(\n...             zip(page.dataoffsets, page.databytecounts)\n...         ):\n...             _ = fh.seek(offset)\n...             data = fh.read(bytecount)\n...             tile, indices, shape = page.decode(\n...                 data, index, jpegtables=page.jpegtables\n...             )\n\nUse Zarr to read parts of the tiled, pyramidal images in the TIFF file:\n>>> import zarr\n>>> store = imread('temp.ome.tif', aszarr=True)\n>>> z = zarr.open(store, mode='r')\n>>> z\n<zarr.hierarchy.Group '/' read-only>\n>>> z[0]  # base layer\n<zarr.core.Array '/0' (8, 2, 512, 512, 3) uint8 read-only>\n>>> z[0][2, 0, 128:384, 256:].shape  # read a tile from the base layer\n(256, 256, 3)\n>>> store.close()\n\nLoad the base layer from the Zarr store as a dask array:\n>>> import dask.array\n>>> store = imread('temp.ome.tif', aszarr=True)\n>>> dask.array.from_zarr(store, 0)\ndask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)...\n>>> store.close()\n\nWrite the Zarr store to a fsspec ReferenceFileSystem in JSON format:\n>>> store = imread('temp.ome.tif', aszarr=True)\n>>> store.write_fsspec('temp.ome.tif.json', url='file://')\n>>> store.close()\n\nOpen the fsspec ReferenceFileSystem as a Zarr group:\n>>> import fsspec\n>>> import imagecodecs.numcodecs\n>>> imagecodecs.numcodecs.register_codecs()\n>>> mapper = fsspec.get_mapper(\n...     'reference://', fo='temp.ome.tif.json', target_protocol='file'\n... )\n>>> z = zarr.open(mapper, mode='r')\n>>> z\n<zarr.hierarchy.Group '/' read-only>\n\nCreate an OME-TIFF file containing an empty, tiled image series and write\nto it via the Zarr interface (note: this does not work with compression):\n>>> imwrite(\n...     'temp.ome.tif',\n...     shape=(8, 800, 600),\n...     dtype='uint16',\n...     photometric='minisblack',\n...     tile=(128, 128),\n...     metadata={'axes': 'CYX'}\n... )\n>>> store = imread('temp.ome.tif', mode='r+', aszarr=True)\n>>> z = zarr.open(store, mode='r+')\n>>> z\n<zarr.core.Array (8, 800, 600) uint16>\n>>> z[3, 100:200, 200:300:2] = 1024\n>>> store.close()\n\nRead images from a sequence of TIFF files as NumPy array using two I/O worker\nthreads:\n>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))\n>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))\n>>> image_sequence = imread(\n...     ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1\n... )\n>>> image_sequence.shape\n(2, 64, 64)\n>>> image_sequence.dtype\ndtype('float64')\n\nRead an image stack from a series of TIFF files with a file name pattern\nas NumPy or Zarr arrays:\n>>> image_sequence = TiffSequence(\n...     'temp_C0*.tif', pattern=r'_(C)(\\d+)(T)(\\d+)'\n... 
)\n>>> image_sequence.shape\n(1, 2)\n>>> image_sequence.axes\n'CT'\n>>> data = image_sequence.asarray()\n>>> data.shape\n(1, 2, 64, 64)\n>>> store = image_sequence.aszarr()\n>>> zarr.open(store, mode='r')\n<zarr.core.Array (1, 2, 64, 64) float64 read-only>\n>>> image_sequence.close()\n\nWrite the Zarr store to a fsspec ReferenceFileSystem in JSON format:\n>>> store = image_sequence.aszarr()\n>>> store.write_fsspec('temp.json', url='file://')\n\nOpen the fsspec ReferenceFileSystem as a Zarr array:\n>>> import fsspec\n>>> import tifffile.numcodecs\n>>> tifffile.numcodecs.register_codec()\n>>> mapper = fsspec.get_mapper(\n...     'reference://', fo='temp.json', target_protocol='file'\n... )\n>>> zarr.open(mapper, mode='r')\n<zarr.core.Array (1, 2, 64, 64) float64 read-only>\n\nInspect the TIFF file from the command line:\n$ python -m tifffile temp.ome.tif\n\n", "description": "Read and write image data from TIFF files."}, {"name": "thrift", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Python bindings for the Apache Thrift RPC system."}, {"name": "threadpoolctl", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThread-pool Controls \nInstallation\nUsage\nCommand Line Interface\nPython Runtime Programmatic Introspection\nSetting the Maximum Size of Thread-Pools\nRestricting the limits to the scope of a function\nWriting a custom library controller\nSequential BLAS within OpenMP parallel region\nKnown Limitations\nMaintainers\nCredits\n\n\n\n\n\nREADME.md\n\n\n\n\nThread-pool Controls  \nPython helpers to limit the number of threads used in the\nthreadpool-backed of common native libraries used for scientific\ncomputing and data science (e.g. 
BLAS and OpenMP).\nFine control of the underlying thread-pool size can be useful in\nworkloads that involve nested parallelism so as to mitigate\noversubscription issues.\nInstallation\n\n\nFor users, install the last published version from PyPI:\npip install threadpoolctl\n\n\nFor contributors, install from the source repository in developer\nmode:\npip install -r dev-requirements.txt\nflit install --symlink\nthen you run the tests with pytest:\npytest\n\n\nUsage\nCommand Line Interface\nGet a JSON description of thread-pools initialized when importing python\npackages such as numpy or scipy for instance:\npython -m threadpoolctl -i numpy scipy.linalg\n[\n  {\n    \"filepath\": \"/home/ogrisel/miniconda3/envs/tmp/lib/libmkl_rt.so\",\n    \"prefix\": \"libmkl_rt\",\n    \"user_api\": \"blas\",\n    \"internal_api\": \"mkl\",\n    \"version\": \"2019.0.4\",\n    \"num_threads\": 2,\n    \"threading_layer\": \"intel\"\n  },\n  {\n    \"filepath\": \"/home/ogrisel/miniconda3/envs/tmp/lib/libiomp5.so\",\n    \"prefix\": \"libiomp\",\n    \"user_api\": \"openmp\",\n    \"internal_api\": \"openmp\",\n    \"version\": null,\n    \"num_threads\": 4\n  }\n]\n\nThe JSON information is written on STDOUT. If some of the packages are missing,\na warning message is displayed on STDERR.\nPython Runtime Programmatic Introspection\nIntrospect the current state of the threadpool-enabled runtime libraries\nthat are loaded when importing Python packages:\n>>> from threadpoolctl import threadpool_info\n>>> from pprint import pprint\n>>> pprint(threadpool_info())\n[]\n\n>>> import numpy\n>>> pprint(threadpool_info())\n[{'filepath': '/home/ogrisel/miniconda3/envs/tmp/lib/libmkl_rt.so',\n  'internal_api': 'mkl',\n  'num_threads': 2,\n  'prefix': 'libmkl_rt',\n  'threading_layer': 'intel',\n  'user_api': 'blas',\n  'version': '2019.0.4'},\n {'filepath': '/home/ogrisel/miniconda3/envs/tmp/lib/libiomp5.so',\n  'internal_api': 'openmp',\n  'num_threads': 4,\n  'prefix': 'libiomp',\n  'user_api': 'openmp',\n  'version': None}]\n\n>>> import xgboost\n>>> pprint(threadpool_info())\n[{'filepath': '/home/ogrisel/miniconda3/envs/tmp/lib/libmkl_rt.so',\n  'internal_api': 'mkl',\n  'num_threads': 2,\n  'prefix': 'libmkl_rt',\n  'threading_layer': 'intel',\n  'user_api': 'blas',\n  'version': '2019.0.4'},\n {'filepath': '/home/ogrisel/miniconda3/envs/tmp/lib/libiomp5.so',\n  'internal_api': 'openmp',\n  'num_threads': 4,\n  'prefix': 'libiomp',\n  'user_api': 'openmp',\n  'version': None},\n {'filepath': '/home/ogrisel/miniconda3/envs/tmp/lib/libgomp.so.1.0.0',\n  'internal_api': 'openmp',\n  'num_threads': 4,\n  'prefix': 'libgomp',\n  'user_api': 'openmp',\n  'version': None}]\nIn the above example, numpy was installed from the default anaconda channel and comes\nwith MKL and its Intel OpenMP (libiomp5) implementation while xgboost was installed\nfrom pypi.org and links against GNU OpenMP (libgomp) so both OpenMP runtimes are\nloaded in the same Python program.\nThe state of these libraries is also accessible through the object oriented API:\n>>> from threadpoolctl import ThreadpoolController, threadpool_info\n>>> from pprint import pprint\n>>> import numpy\n>>> controller = ThreadpoolController()\n>>> pprint(controller.info())\n[{'architecture': 'Haswell',\n  'filepath': '/home/jeremie/miniconda/envs/dev/lib/libopenblasp-r0.3.17.so',\n  'internal_api': 'openblas',\n  'num_threads': 4,\n  'prefix': 'libopenblas',\n  'threading_layer': 'pthreads',\n  'user_api': 'blas',\n  'version': '0.3.17'}]\n\n>>> controller.info() == 
threadpool_info()\nTrue\nSetting the Maximum Size of Thread-Pools\nControl the number of threads used by the underlying runtime libraries\nin specific sections of your Python program:\n>>> from threadpoolctl import threadpool_limits\n>>> import numpy as np\n\n>>> with threadpool_limits(limits=1, user_api='blas'):\n...     # In this block, calls to blas implementation (like openblas or MKL)\n...     # will be limited to use only one thread. They can thus be used jointly\n...     # with thread-parallelism.\n...     a = np.random.randn(1000, 1000)\n...     a_squared = a @ a\nThe threadpools can also be controlled via the object oriented API, which is especially\nuseful to avoid searching through all the loaded shared libraries each time. It will\nhowever not act on libraries loaded after the instantiation of the\nThreadpoolController:\n>>> from threadpoolctl import ThreadpoolController\n>>> import numpy as np\n>>> controller = ThreadpoolController()\n\n>>> with controller.limit(limits=1, user_api='blas'):\n...     a = np.random.randn(1000, 1000)\n...     a_squared = a @ a\nRestricting the limits to the scope of a function\nthreadpool_limits and ThreadpoolController can also be used as decorators to set\nthe maximum number of threads used by the supported libraries at a function level. The\ndecorators are accessible through their wrap method:\n>>> from threadpoolctl import ThreadpoolController, threadpool_limits\n>>> import numpy as np\n>>> controller = ThreadpoolController()\n\n>>> @controller.wrap(limits=1, user_api='blas')\n... # or @threadpool_limits.wrap(limits=1, user_api='blas')\n... def my_func():\n...     # Inside this function, calls to blas implementation (like openblas or MKL)\n...     # will be limited to use only one thread.\n...     a = np.random.randn(1000, 1000)\n...     a_squared = a @ a\n...\nWriting a custom library controller\nCurrently, threadpoolctl has support for OpenMP and the main BLAS libraries.\nHowever it can also be used to control the threadpool of other native libraries,\nprovided that they expose an API to get and set the limit on the number of threads.\nFor that, one must implement a controller for this library and register it to\nthreadpoolctl.\nA custom controller must be a subclass of the LibController class and implement\nthe attributes and methods described in the docstring of LibController. Then this\nnew controller class must be registered using the threadpoolctl.register function.\nAn complete example can be found here.\nSequential BLAS within OpenMP parallel region\nWhen one wants to have sequential BLAS calls within an OpenMP parallel region, it's\nsafer to set limits=\"sequential_blas_under_openmp\" since setting limits=1 and\nuser_api=\"blas\" might not lead to the expected behavior in some configurations\n(e.g. OpenBLAS with the OpenMP threading layer\nxianyi/OpenBLAS#2985).\nKnown Limitations\n\n\nthreadpool_limits can fail to limit the number of inner threads when nesting\nparallel loops managed by distinct OpenMP runtime implementations (for instance\nlibgomp from GCC and libomp from clang/llvm or libiomp from ICC).\nSee the test_openmp_nesting function in tests/test_threadpoolctl.py\nfor an example. More information can be found at:\nhttps://github.com/jeremiedbb/Nested_OpenMP\nNote however that this problem does not happen when threadpool_limits is\nused to limit the number of threads used internally by BLAS calls that are\nthemselves nested under OpenMP parallel loops. 
threadpool_limits works as\nexpected, even if the inner BLAS implementation relies on a distinct OpenMP\nimplementation.\n\n\nUsing Intel OpenMP (ICC) and LLVM OpenMP (clang) in the same Python program\nunder Linux is known to cause problems. See the following guide for more details\nand workarounds:\nhttps://github.com/joblib/threadpoolctl/blob/master/multiple_openmp.md\n\n\nSetting the maximum number of threads of the OpenMP and BLAS libraries has a global\neffect and impacts the whole Python process. There is no thread level isolation as\nthese libraries do not offer thread-local APIs to configure the number of threads to\nuse in nested parallel calls.\n\n\nMaintainers\nTo make a release:\nBump the version number (__version__) in threadpoolctl.py.\nBuild the distribution archives:\npip install flit\nflit build\nCheck the contents of dist/.\nIf everything is fine, make a commit for the release, tag it, push the\ntag to github and then:\nflit publish\nCredits\nThe initial dynamic library introspection code was written by @anton-malakhov\nfor the smp package available at https://github.com/IntelPython/smp .\nthreadpoolctl extends this for other operating systems. Contrary to smp,\nthreadpoolctl does not attempt to limit the size of Python multiprocessing\npools (threads or processes) or set operating system-level CPU affinity\nconstraints: threadpoolctl only interacts with native libraries via their\npublic runtime APIs.\n\n\n", "description": "Limit number of threads used by libraries like NumPy."}, {"name": "thinc", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThinc: A refreshing functional take on deep learning, compatible with your favorite libraries\nFrom the makers of spaCy and Prodigy\n\ud83d\udd25 Features\n\ud83d\ude80 Quickstart\n\ud83d\udcd3 Selected examples and notebooks\n\ud83d\udcd6 Documentation & usage guides\n\ud83d\uddfa What's where\n\ud83d\udc0d Development notes\n\ud83d\udc77\u200d\u2640\ufe0f Building Thinc from source\n\ud83d\udea6 Running tests\n\n\n\n\n\nREADME.md\n\n\n\n\n\nThinc: A refreshing functional take on deep learning, compatible with your favorite libraries\nFrom the makers of spaCy and Prodigy\nThinc is a lightweight deep learning library that offers\nan elegant, type-checked, functional-programming API for composing models,\nwith support for layers defined in other frameworks such as PyTorch,\nTensorFlow and MXNet. You can use Thinc as an interface layer, a standalone\ntoolkit or a flexible way to develop new models. Previous versions of Thinc have\nbeen running quietly in production in thousands of companies, via both\nspaCy and Prodigy. We wrote the new\nversion to let users compose, configure and deploy custom models built with\ntheir favorite framework.\n\n\n\n\n\n\n\n\ud83d\udd25 Features\n\nType-check your model definitions with custom types and\nmypy plugin.\nWrap PyTorch, TensorFlow and MXNet models for use in your network.\nConcise functional-programming approach to model definition, using\ncomposition rather than inheritance.\nOptional custom infix notation via operator overloading.\nIntegrated config system to describe trees of objects and hyperparameters.\nChoice of extensible backends.\nRead more \u2192\n\n\ud83d\ude80 Quickstart\nThinc is compatible with Python 3.6+ and runs on Linux, macOS and\nWindows. The latest releases with binary wheels are available from\npip. Before you install Thinc and its\ndependencies, make sure that your pip, setuptools and wheel are up to\ndate. 
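\nAs a quick taste of the functional composition described under Features, here is a minimal sketch (not taken from the Thinc docs; it assumes the Relu and Softmax layers and the chain combinator exported by thinc.api, with made-up sizes and zero-filled placeholder data), shown ahead of the installation commands that follow:\nimport numpy\nfrom thinc.api import chain, Relu, Softmax\n\n# Compose a two-layer network; unset dimensions are inferred at initialization.\nmodel = chain(Relu(nO=32), Softmax())\n\n# Placeholder data: 128 examples, 16 input features, 10 output classes.\nX = numpy.zeros((128, 16), dtype='float32')\nY = numpy.zeros((128, 10), dtype='float32')\nmodel.initialize(X=X, Y=Y)  # infer missing input/output sizes from the data\nY_pred = model.predict(X)   # forward pass only, no gradient bookkeeping\nIn a real project you would train with model.begin_update and an optimizer from thinc.api; the intro_to_thinc notebook listed below walks through that loop.\n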
For the most recent releases, pip 19.3 or newer is recommended.\npip install -U pip setuptools wheel\npip install thinc\nSee the extended installation docs for\ndetails on optional dependencies for different backends and GPU. You might also\nwant to\nset up static type checking to\ntake advantage of Thinc's type system.\n\n\u26a0\ufe0f If you have installed PyTorch and you are using Python 3.7+, uninstall the\npackage dataclasses with pip uninstall dataclasses, since it may have been\ninstalled by PyTorch and is incompatible with Python 3.7+.\n\n\ud83d\udcd3 Selected examples and notebooks\nAlso see the /examples directory and\nusage documentation for more examples. Most examples\nare Jupyter notebooks \u2013 to launch them on\nGoogle Colab (with GPU support!) click on\nthe button next to the notebook name.\n\n\n\nNotebook\nDescription\n\n\n\n\nintro_to_thinc\nEverything you need to know to get started. Composing and training a model on the MNIST data, using config files, registering custom functions and wrapping PyTorch, TensorFlow and MXNet models.\n\n\ntransformers_tagger_bert\nHow to use Thinc, transformers and PyTorch to train a part-of-speech tagger. From model definition and config to the training loop.\n\n\npos_tagger_basic_cnn\nImplementing and training a basic CNN for part-of-speech tagging model without external dependencies and using different levels of Thinc's config system.\n\n\nparallel_training_ray\nHow to set up synchronous and asynchronous parameter server training with Thinc and Ray.\n\n\n\nView more \u2192\n\ud83d\udcd6 Documentation & usage guides\n\n\n\nDocumentation\nDescription\n\n\n\n\nIntroduction\nEverything you need to know.\n\n\nConcept & Design\nThinc's conceptual model and how it works.\n\n\nDefining and using models\nHow to compose models and update state.\n\n\nConfiguration system\nThinc's config system and function registry.\n\n\nIntegrating PyTorch, TensorFlow & MXNet\nInteroperability with machine learning frameworks\n\n\nLayers API\nWeights layers, transforms, combinators and wrappers.\n\n\nType Checking\nType-check your model definitions and more.\n\n\n\n\ud83d\uddfa What's where\n\n\n\nModule\nDescription\n\n\n\n\nthinc.api\nUser-facing API. All classes and functions should be imported from here.\n\n\nthinc.types\nCustom types and dataclasses.\n\n\nthinc.model\nThe Model class. All Thinc models are an instance (not a subclass) of Model.\n\n\nthinc.layers\nThe layers. Each layer is implemented in its own module.\n\n\nthinc.shims\nInterface for external models implemented in PyTorch, TensorFlow etc.\n\n\nthinc.loss\nFunctions to calculate losses.\n\n\nthinc.optimizers\nFunctions to create optimizers. Currently supports \"vanilla\" SGD, Adam and RAdam.\n\n\nthinc.schedules\nGenerators for different rates, schedules, decays or series.\n\n\nthinc.backends\nBackends for numpy and cupy.\n\n\nthinc.config\nConfig parsing and validation and function registry system.\n\n\nthinc.util\nUtilities and helper functions.\n\n\n\n\ud83d\udc0d Development notes\nThinc uses black for auto-formatting,\nflake8 for linting and\nmypy for type checking. All code is\nwritten compatible with Python 3.6+, with type hints wherever possible. See\nthe type reference for more details on\nThinc's custom types.\n\ud83d\udc77\u200d\u2640\ufe0f Building Thinc from source\nBuilding Thinc from source requires the full dependencies listed in\nrequirements.txt to be installed. 
You'll also need a\ncompiler to build the C extensions.\ngit clone https://github.com/explosion/thinc\ncd thinc\npython -m venv .env\nsource .env/bin/activate\npip install -U pip setuptools wheel\npip install -r requirements.txt\npip install --no-build-isolation .\nAlternatively, install in editable mode:\npip install -r requirements.txt\npip install --no-build-isolation --editable .\nOr by setting PYTHONPATH:\nexport PYTHONPATH=`pwd`\npip install -r requirements.txt\npython setup.py build_ext --inplace\n\ud83d\udea6 Running tests\nThinc comes with an extensive test suite. The following should\nall pass and not report any warnings or errors:\npython -m pytest thinc    # test suite\npython -m mypy thinc      # type checks\npython -m flake8 thinc    # linting\nTo view test coverage, you can run python -m pytest thinc --cov=thinc. We aim\nfor a 100% test coverage. This doesn't mean that we meticulously write tests for\nevery single line \u2013 we ignore blocks that are not relevant or difficult to test\nand make sure that the tests execute all code paths.\n\n\n", "description": "Functional deep learning library.", "category": "Machine learning"}, {"name": "Theano-PyMC", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "textract", "readme": "\n\n\n\nREADME.rst\n\n\n\n\ntextract\nExtract text from any document. No muss. No fuss.\nFull documentation.\n \n \n  \n\n \n \n\n\n\n", "description": "Extract text from documents"}, {"name": "textblob", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTextBlob: Simplified Text Processing\nFeatures\nGet it now\nExamples\nDocumentation\nRequirements\nProject Links\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nTextBlob: Simplified Text Processing\n\n\nHomepage: https://textblob.readthedocs.io/\nTextBlob is a Python (2 and 3) library for processing textual data. 
It provides a simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.\nfrom textblob import TextBlob\n\ntext = '''\nThe titular threat of The Blob has always struck me as the ultimate movie\nmonster: an insatiably hungry, amoeba-like mass able to penetrate\nvirtually any safeguard, capable of--as a doomed doctor chillingly\ndescribes it--\"assimilating flesh on contact.\nSnide comparisons to gelatin be damned, it's a concept with the most\ndevastating of potential consequences, not unlike the grey goo scenario\nproposed by technological theorists fearful of\nartificial intelligence run rampant.\n'''\n\nblob = TextBlob(text)\nblob.tags           # [('The', 'DT'), ('titular', 'JJ'),\n                    #  ('threat', 'NN'), ('of', 'IN'), ...]\n\nblob.noun_phrases   # WordList(['titular threat', 'blob',\n                    #            'ultimate movie monster',\n                    #            'amoeba-like mass', ...])\n\nfor sentence in blob.sentences:\n    print(sentence.sentiment.polarity)\n# 0.060\n# -0.341\nTextBlob stands on the giant shoulders of NLTK and pattern, and plays nicely with both.\n\nFeatures\n\nNoun phrase extraction\nPart-of-speech tagging\nSentiment analysis\nClassification (Naive Bayes, Decision Tree)\nTokenization (splitting text into words and sentences)\nWord and phrase frequencies\nParsing\nn-grams\nWord inflection (pluralization and singularization) and lemmatization\nSpelling correction\nAdd new models or languages through extensions\nWordNet integration\n\n\nGet it now\n$ pip install -U textblob\n$ python -m textblob.download_corpora\n\n\nExamples\nSee more examples at the Quickstart guide.\n\nDocumentation\nFull documentation is available at https://textblob.readthedocs.io/.\n\nRequirements\n\nPython >= 2.7 or >= 3.5\n\n\nProject Links\n\nDocs: https://textblob.readthedocs.io/\nChangelog: https://textblob.readthedocs.io/en/latest/changelog.html\nPyPI: https://pypi.python.org/pypi/TextBlob\nIssues: https://github.com/sloria/TextBlob/issues\n\n\nLicense\nMIT licensed. See the bundled LICENSE file for more details.\n\n\n", "description": "Provides natural language processing tools"}, {"name": "text-unidecode", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nText-Unidecode\nInstallation\nUsage\n\n\n\n\n\nREADME.rst\n\n\n\n\nText-Unidecode\n\ntext-unidecode is the most basic port of the\nText::Unidecode\nPerl library.\nThere are other Python ports of Text::Unidecode (unidecode\nand isounidecode). 
unidecode is GPL; isounidecode uses too much memory,\nand it didn't support Python 3 when this package was created.\nYou can redistribute it and/or modify this port under the terms of either:\n\nArtistic License, or\nGPL or GPLv2+\n\nIf you're OK with GPL-only, use unidecode (it has better memory usage and\nbetter transliteration quality).\ntext-unidecode supports Python 2.7 and 3.4+.\n\nInstallation\npip install text-unidecode\n\n\nUsage\n>>> from text_unidecode import unidecode\n>>> unidecode(u'\u043a\u0430\u043a\u043e\u0439-\u0442\u043e \u0442\u0435\u043a\u0441\u0442')\n'kakoi-to tekst'\n\n\n\n"}, {"name": "terminado", "readme": "\n\n\n\nREADME.md\n\n\n\n\nTerminado\n\n\nThis is a Tornado websocket backend for the\nXterm.js Javascript terminal emulator library.\nIt evolved out of pyxterm, which\nwas part of GraphTerm (as\nlineterm.py), v0.57.0 (2014-07-18), and ultimately derived from the\npublic-domain Ajaxterm\ncode, v0.11 (2008-11-13) (also on Github as part of\nQWeb).\nModules:\n\nterminado.management: controls launching virtual terminals,\nconnecting them to Tornado's event loop, and closing them down.\nterminado.websocket: Provides a websocket handler for\ncommunicating with a terminal.\nterminado.uimodule: Provides a Terminal Tornado UI\nModule.\n\nJS:\n\nterminado/_static/terminado.js: A lightweight wrapper to set up a\nterm.js terminal with a websocket.\n\nLocal Installation:\n\n$ pip install -e .[test]\n\nUsage example:\nimport os.path\nimport tornado.web\nimport tornado.ioloop\n# This demo requires tornado_xstatic and XStatic-term.js\nimport tornado_xstatic\n\nimport terminado\nSTATIC_DIR = os.path.join(os.path.dirname(terminado.__file__), \"_static\")\n\nclass TerminalPageHandler(tornado.web.RequestHandler):\n    def get(self):\n        return self.render(\"termpage.html\", static=self.static_url,\n                           xstatic=self.application.settings['xstatic_url'],\n                           ws_url_path=\"/websocket\")\n\nif __name__ == '__main__':\n    term_manager = terminado.SingleTermManager(shell_command=['bash'])\n    handlers = [\n                (r\"/websocket\", terminado.TermSocket,\n                     {'term_manager': term_manager}),\n                (r\"/\", TerminalPageHandler),\n                (r\"/xstatic/(.*)\", tornado_xstatic.XStaticFileHandler,\n                     {'allowed_modules': ['termjs']})\n               ]\n    app = tornado.web.Application(handlers, static_path=STATIC_DIR,\n                      xstatic_url = tornado_xstatic.url_maker('/xstatic/'))\n    # Serve at http://localhost:8765/ N.B. Leaving out 'localhost' here will\n    # work, but it will listen on the public network interface as well.\n    # Given what terminado does, that would be rather a security hole.\n    app.listen(8765, 'localhost')\n    try:\n        tornado.ioloop.IOLoop.instance().start()\n    finally:\n        term_manager.shutdown()\nSee the demos\ndirectory for\nmore examples. 
This is a simplified version of the single.py demo.\nRun the unit tests with:\n\n$ pytest\n\n\n\n", "description": "Terminals served by Tornado websockets."}, {"name": "tenacity", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTenacity\nFeatures\nInstallation\nExamples\nBasic Retry\nStopping\nWaiting before retrying\nWhether to retry\nError Handling\nBefore and After Retry, and Logging\nStatistics\nCustom Callbacks\nRetryCallState\nOther Custom Callbacks\nChanging Arguments at Run Time\nRetrying code block\nAsync and retry\nContribute\nChangelogs\n\n\n\n\n\nREADME.rst\n\n\n\n\nTenacity\n\n\n\n\nPlease refer to the tenacity documentation for a better experience.\nTenacity is an Apache 2.0 licensed general-purpose retrying library, written in\nPython, to simplify the task of adding retry behavior to just about anything.\nIt originates from a fork of retrying which is sadly no longer\nmaintained. Tenacity isn't\napi compatible with retrying but adds significant new functionality and\nfixes a number of longstanding bugs.\nThe simplest use case is retrying a flaky function whenever an Exception\noccurs until a value is returned.\n.. testcode::\n\n    import random\n    from tenacity import retry\n\n    @retry\n    def do_something_unreliable():\n        if random.randint(0, 10) > 1:\n            raise IOError(\"Broken sauce, everything is hosed!!!111one\")\n        else:\n            return \"Awesome sauce!\"\n\n    print(do_something_unreliable())\n\n\n.. testoutput::\n   :hide:\n\n   Awesome sauce!\n\n\n\n.. toctree::\n    :hidden:\n    :maxdepth: 2\n\n    changelog\n    api\n\n\n\n\nFeatures\n\nGeneric Decorator API\nSpecify stop condition (i.e. limit by number of attempts)\nSpecify wait condition (i.e. exponential backoff sleeping between attempts)\nCustomize retrying on Exceptions\nCustomize retrying on expected returned result\nRetry on coroutines\nRetry code block with context manager\n\n\nInstallation\nTo install tenacity, simply:\n$ pip install tenacity\n\nExamples\n\nBasic Retry\n.. testsetup:: *\n\n    import logging\n    #\n    # Note the following import is used for demonstration convenience only.\n    # Production code should always explicitly import the names it needs.\n    #\n    from tenacity import *\n\n    class MyException(Exception):\n        pass\n\n\nAs you saw above, the default behavior is to retry forever without waiting when\nan exception is raised.\n.. testcode::\n\n    @retry\n    def never_gonna_give_you_up():\n        print(\"Retry forever ignoring Exceptions, don't wait between retries\")\n        raise Exception\n\n\n\nStopping\nLet's be a little less persistent and set some boundaries, such as the number\nof attempts before giving up.\n.. testcode::\n\n    @retry(stop=stop_after_attempt(7))\n    def stop_after_7_attempts():\n        print(\"Stopping after 7 attempts\")\n        raise Exception\n\n\nWe don't have all day, so let's set a boundary for how long we should be\nretrying stuff.\n.. testcode::\n\n    @retry(stop=stop_after_delay(10))\n    def stop_after_10_s():\n        print(\"Stopping after 10 seconds\")\n        raise Exception\n\n\nYou can combine several stop conditions by using the | operator:\n.. testcode::\n\n    @retry(stop=(stop_after_delay(10) | stop_after_attempt(5)))\n    def stop_after_10_s_or_5_retries():\n        print(\"Stopping after 10 seconds or 5 retries\")\n        raise Exception\n\n\n\nWaiting before retrying\nMost things don't like to be polled as fast as possible, so let's just wait 2\nseconds between retries.\n.. 
testcode::\n\n    @retry(wait=wait_fixed(2))\n    def wait_2_s():\n        print(\"Wait 2 second between retries\")\n        raise Exception\n\n\nSome things perform best with a bit of randomness injected.\n.. testcode::\n\n    @retry(wait=wait_random(min=1, max=2))\n    def wait_random_1_to_2_s():\n        print(\"Randomly wait 1 to 2 seconds between retries\")\n        raise Exception\n\n\nThen again, it's hard to beat exponential backoff when retrying distributed\nservices and other remote endpoints.\n.. testcode::\n\n    @retry(wait=wait_exponential(multiplier=1, min=4, max=10))\n    def wait_exponential_1():\n        print(\"Wait 2^x * 1 second between each retry starting with 4 seconds, then up to 10 seconds, then 10 seconds afterwards\")\n        raise Exception\n\n\n\nThen again, it's also hard to beat combining fixed waits and jitter (to\nhelp avoid thundering herds) when retrying distributed services and other\nremote endpoints.\n.. testcode::\n\n    @retry(wait=wait_fixed(3) + wait_random(0, 2))\n    def wait_fixed_jitter():\n        print(\"Wait at least 3 seconds, and add up to 2 seconds of random delay\")\n        raise Exception\n\n\nWhen multiple processes are in contention for a shared resource, exponentially\nincreasing jitter helps minimise collisions.\n.. testcode::\n\n    @retry(wait=wait_random_exponential(multiplier=1, max=60))\n    def wait_exponential_jitter():\n        print(\"Randomly wait up to 2^x * 1 seconds between each retry until the range reaches 60 seconds, then randomly up to 60 seconds afterwards\")\n        raise Exception\n\n\n\nSometimes it's necessary to build a chain of backoffs.\n.. testcode::\n\n    @retry(wait=wait_chain(*[wait_fixed(3) for i in range(3)] +\n                           [wait_fixed(7) for i in range(2)] +\n                           [wait_fixed(9)]))\n    def wait_fixed_chained():\n        print(\"Wait 3s for 3 attempts, 7s for the next 2 attempts and 9s for all attempts thereafter\")\n        raise Exception\n\n\n\nWhether to retry\nWe have a few options for dealing with retries that raise specific or general\nexceptions, as in the cases here.\n.. testcode::\n\n    class ClientError(Exception):\n        \"\"\"Some type of client error.\"\"\"\n\n    @retry(retry=retry_if_exception_type(IOError))\n    def might_io_error():\n        print(\"Retry forever with no wait if an IOError occurs, raise any other errors\")\n        raise Exception\n\n    @retry(retry=retry_if_not_exception_type(ClientError))\n    def might_client_error():\n        print(\"Retry forever with no wait if any error other than ClientError occurs. Immediately raise ClientError.\")\n        raise Exception\n\n\nWe can also use the result of the function to alter the behavior of retrying.\n.. testcode::\n\n    def is_none_p(value):\n        \"\"\"Return True if value is None\"\"\"\n        return value is None\n\n    @retry(retry=retry_if_result(is_none_p))\n    def might_return_none():\n        print(\"Retry with no wait if return value is None\")\n\n\nSee also these methods:\n.. testcode::\n\n    retry_if_exception\n    retry_if_exception_type\n    retry_if_not_exception_type\n    retry_unless_exception_type\n    retry_if_result\n    retry_if_not_result\n    retry_if_exception_message\n    retry_if_not_exception_message\n    retry_any\n    retry_all\n\n\nWe can also combine several conditions:\n.. 
testcode::\n\n    def is_none_p(value):\n        \"\"\"Return True if value is None\"\"\"\n        return value is None\n\n    @retry(retry=(retry_if_result(is_none_p) | retry_if_exception_type()))\n    def might_return_none():\n        print(\"Retry forever ignoring Exceptions with no wait if return value is None\")\n\n\nAny combination of stop, wait, etc. is also supported to give you the freedom\nto mix and match.\nIt's also possible to retry explicitly at any time by raising the TryAgain\nexception:\n.. testcode::\n\n   @retry\n   def do_something():\n       result = something_else()\n       if result == 23:\n          raise TryAgain\n\n\n\nError Handling\nNormally when your function fails its final time (and will not be retried again based on your settings),\na RetryError is raised. The exception your code encountered will be shown somewhere in the middle\nof the stack trace.\nIf you would rather see the exception your code encountered at the end of the stack trace (where it\nis most visible), you can set reraise=True.\n.. testcode::\n\n    @retry(reraise=True, stop=stop_after_attempt(3))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n    try:\n        raise_my_exception()\n    except MyException:\n        # timed out retrying\n        pass\n\n\n\nBefore and After Retry, and Logging\nIt's possible to execute an action before any attempt of calling the function\nby using the before callback function:\n.. testcode::\n\n    import logging\n    import sys\n\n    logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)\n\n    logger = logging.getLogger(__name__)\n\n    @retry(stop=stop_after_attempt(3), before=before_log(logger, logging.DEBUG))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n\nIn the same spirit, It's possible to execute after a call that failed:\n.. testcode::\n\n    import logging\n    import sys\n\n    logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)\n\n    logger = logging.getLogger(__name__)\n\n    @retry(stop=stop_after_attempt(3), after=after_log(logger, logging.DEBUG))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n\nIt's also possible to only log failures that are going to be retried. Normally\nretries happen after a wait interval, so the keyword argument is called\nbefore_sleep:\n.. testcode::\n\n    import logging\n    import sys\n\n    logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)\n\n    logger = logging.getLogger(__name__)\n\n    @retry(stop=stop_after_attempt(3),\n           before_sleep=before_sleep_log(logger, logging.DEBUG))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n\n\n\nStatistics\nYou can access the statistics about the retry made over a function by using the\nretry attribute attached to the function and its statistics attribute:\n.. testcode::\n\n    @retry(stop=stop_after_attempt(3))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n    try:\n        raise_my_exception()\n    except Exception:\n        pass\n\n    print(raise_my_exception.retry.statistics)\n\n\n.. testoutput::\n   :hide:\n\n   ...\n\n\n\nCustom Callbacks\nYou can also define your own callbacks. The callback should accept one\nparameter called retry_state that contains all information about current\nretry invocation.\nFor example, you can call a custom callback function after all retries failed,\nwithout raising an exception (or you can re-raise or do anything really)\n.. 
testcode::\n\n    def return_last_value(retry_state):\n        \"\"\"return the result of the last call attempt\"\"\"\n        return retry_state.outcome.result()\n\n    def is_false(value):\n        \"\"\"Return True if value is False\"\"\"\n        return value is False\n\n    # will return False after trying 3 times to get a different result\n    @retry(stop=stop_after_attempt(3),\n           retry_error_callback=return_last_value,\n           retry=retry_if_result(is_false))\n    def eventually_return_false():\n        return False\n\n\n\nRetryCallState\nretry_state argument is an object of RetryCallState class:\n.. autoclass:: tenacity.RetryCallState\n\n   Constant attributes:\n\n   .. autoattribute:: start_time(float)\n      :annotation:\n\n   .. autoattribute:: retry_object(BaseRetrying)\n      :annotation:\n\n   .. autoattribute:: fn(callable)\n      :annotation:\n\n   .. autoattribute:: args(tuple)\n      :annotation:\n\n   .. autoattribute:: kwargs(dict)\n      :annotation:\n\n   Variable attributes:\n\n   .. autoattribute:: attempt_number(int)\n      :annotation:\n\n   .. autoattribute:: outcome(tenacity.Future or None)\n      :annotation:\n\n   .. autoattribute:: outcome_timestamp(float or None)\n      :annotation:\n\n   .. autoattribute:: idle_for(float)\n      :annotation:\n\n   .. autoattribute:: next_action(tenacity.RetryAction or None)\n      :annotation:\n\n\n\nOther Custom Callbacks\nIt's also possible to define custom callbacks for other keyword arguments.\n.. function:: my_stop(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n   :return: whether or not retrying should stop\n   :rtype: bool\n\n\n.. function:: my_wait(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n   :return: number of seconds to wait before next retry\n   :rtype: float\n\n\n.. function:: my_retry(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n   :return: whether or not retrying should continue\n   :rtype: bool\n\n\n.. function:: my_before(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n\n\n.. function:: my_after(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n\n\n.. function:: my_before_sleep(retry_state)\n\n   :param RetryState retry_state: info about current retry invocation\n\n\nHere's an example with a custom before_sleep function:\n.. testcode::\n\n    import logging\n\n    logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)\n\n    logger = logging.getLogger(__name__)\n\n    def my_before_sleep(retry_state):\n        if retry_state.attempt_number < 1:\n            loglevel = logging.INFO\n        else:\n            loglevel = logging.WARNING\n        logger.log(\n            loglevel, 'Retrying %s: attempt %s ended with: %s',\n            retry_state.fn, retry_state.attempt_number, retry_state.outcome)\n\n    @retry(stop=stop_after_attempt(3), before_sleep=my_before_sleep)\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n    try:\n        raise_my_exception()\n    except RetryError:\n        pass\n\n\n\n\nChanging Arguments at Run Time\nYou can change the arguments of a retry decorator as needed when calling it by\nusing the retry_with function attached to the wrapped function:\n.. 
testcode::\n\n    @retry(stop=stop_after_attempt(3))\n    def raise_my_exception():\n        raise MyException(\"Fail\")\n\n    try:\n        raise_my_exception.retry_with(stop=stop_after_attempt(4))()\n    except Exception:\n        pass\n\n    print(raise_my_exception.retry.statistics)\n\n\n.. testoutput::\n   :hide:\n\n   ...\n\n\nIf you want to use variables to set up the retry parameters, you don't have\nto use the retry decorator - you can instead use Retrying directly:\n.. testcode::\n\n    def never_good_enough(arg1):\n        raise Exception('Invalid argument: {}'.format(arg1))\n\n    def try_never_good_enough(max_attempts=3):\n        retryer = Retrying(stop=stop_after_attempt(max_attempts), reraise=True)\n        retryer(never_good_enough, 'I really do try')\n\n\n\nRetrying code block\nTenacity allows you to retry a code block without the need to wraps it in an\nisolated function. This makes it easy to isolate failing block while sharing\ncontext. The trick is to combine a for loop and a context manager.\n.. testcode::\n\n   from tenacity import Retrying, RetryError, stop_after_attempt\n\n   try:\n       for attempt in Retrying(stop=stop_after_attempt(3)):\n           with attempt:\n               raise Exception('My code is failing!')\n   except RetryError:\n       pass\n\n\nYou can configure every details of retry policy by configuring the Retrying\nobject.\nWith async code you can use AsyncRetrying.\n.. testcode::\n\n   from tenacity import AsyncRetrying, RetryError, stop_after_attempt\n\n   async def function():\n      try:\n          async for attempt in AsyncRetrying(stop=stop_after_attempt(3)):\n              with attempt:\n                  raise Exception('My code is failing!')\n      except RetryError:\n          pass\n\n\nIn both cases, you may want to set the result to the attempt so it's available\nin retry strategies like retry_if_result. This can be done accessing the\nretry_state property:\n.. testcode::\n\n    from tenacity import AsyncRetrying, retry_if_result\n\n    async def function():\n       async for attempt in AsyncRetrying(retry=retry_if_result(lambda x: x < 3)):\n           with attempt:\n               result = 1  # Some complex calculation, function call, etc.\n           if not attempt.retry_state.outcome.failed:\n               attempt.retry_state.set_result(result)\n       return result\n\n\n\nAsync and retry\nFinally, retry works also on asyncio and Tornado (>= 4.5) coroutines.\nSleeps are done asynchronously too.\n@retry\nasync def my_async_function(loop):\n    await loop.getaddrinfo('8.8.8.8', 53)\n@retry\n@tornado.gen.coroutine\ndef my_async_function(http_client, url):\n    yield http_client.fetch(url)\nYou can even use alternative event loops such as curio or Trio by passing the correct sleep function:\n@retry(sleep=trio.sleep)\nasync def my_async_function(loop):\n    await asks.get('https://example.org')\n\nContribute\n\nCheck for open issues or open a fresh issue to start a discussion around a\nfeature idea or a bug.\nFork the repository on GitHub to start making your changes to the\nmain branch (or branch off of it).\nWrite a test which shows that the bug was fixed or that the feature works as\nexpected.\nAdd a changelog\nMake the docs better (or more detailed, or more easier to read, or ...)\n\n\nChangelogs\nreno is used for managing changelogs. Take a look at their usage docs.\nThe doc generation will automatically compile the changelogs. 
You just need to add them.\n# Opens a template file in an editor\ntox -e reno -- new some-slug-for-my-change --edit\n\n\n", "description": "Implements retry behavior with configurable stop/wait/retry"}, {"name": "tabulate", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npython-tabulate\nInstallation\nBuild status\nLibrary usage\nHeaders\nRow Indices\nTable format\nColumn alignment\nCustom column alignment\nCustom header alignment\nNumber formatting\nText formatting\nWide (fullwidth CJK) symbols\nMultiline cells\nAutomating Multilines\nAdding Separating lines\nANSI support\nUsage of the command line utility\nPerformance considerations\nVersion history\nHow to contribute\nContributors\n\n\n\n\n\nREADME.md\n\n\n\n\npython-tabulate\nPretty-print tabular data in Python, a library and a command-line\nutility.\nThe main use cases of the library are:\n\nprinting small tables without hassle: just one function call,\nformatting is guided by the data itself\nauthoring tabular data for lightweight plain-text markup: multiple\noutput formats suitable for further editing or transformation\nreadable presentation of mixed textual and numeric data: smart\ncolumn alignment, configurable number formatting, alignment by a\ndecimal point\n\nInstallation\nTo install the Python library and the command line utility, run:\npip install tabulate\nThe command line utility will be installed as tabulate to bin on\nLinux (e.g. /usr/bin); or as tabulate.exe to Scripts in your\nPython installation on Windows (e.g. C:\\Python39\\Scripts\\tabulate.exe).\nYou may consider installing the library only for the current user:\npip install tabulate --user\nIn this case the command line utility will be installed to\n~/.local/bin/tabulate on Linux and to\n%APPDATA%\\Python\\Scripts\\tabulate.exe on Windows.\nTo install just the library on Unix-like operating systems:\nTABULATE_INSTALL=lib-only pip install tabulate\nOn Windows:\nset TABULATE_INSTALL=lib-only\npip install tabulate\nBuild status\n \nLibrary usage\nThe module provides just one function, tabulate, which takes a list of\nlists or another tabular data type as the first argument, and outputs a\nnicely formatted plain-text table:\n>>> from tabulate import tabulate\n\n>>> table = [[\"Sun\",696000,1989100000],[\"Earth\",6371,5973.6],\n...          [\"Moon\",1737,73.5],[\"Mars\",3390,641.85]]\n>>> print(tabulate(table))\n-----  ------  -------------\nSun    696000     1.9891e+09\nEarth    6371  5973.6\nMoon     1737    73.5\nMars     3390   641.85\n-----  ------  -------------\nThe following tabular data types are supported:\n\nlist of lists or another iterable of iterables\nlist or another iterable of dicts (keys as columns)\ndict of iterables (keys as columns)\nlist of dataclasses (Python 3.7+ only, field names as columns)\ntwo-dimensional NumPy array\nNumPy record arrays (names as columns)\npandas.DataFrame\n\nTabulate is a Python3 library.\nHeaders\nThe second optional argument named headers defines a list of column\nheaders to be used:\n>>> print(tabulate(table, headers=[\"Planet\",\"R (km)\", \"mass (x 10^29 kg)\"]))\nPlanet      R (km)    mass (x 10^29 kg)\n--------  --------  -------------------\nSun         696000           1.9891e+09\nEarth         6371        5973.6\nMoon          1737          73.5\nMars          3390         641.85\nIf headers=\"firstrow\", then the first row of data is used:\n>>> print(tabulate([[\"Name\",\"Age\"],[\"Alice\",24],[\"Bob\",19]],\n...                
headers=\"firstrow\"))\nName      Age\n------  -----\nAlice      24\nBob        19\nIf headers=\"keys\", then the keys of a dictionary/dataframe, or column\nindices are used. It also works for NumPy record arrays and lists of\ndictionaries or named tuples:\n>>> print(tabulate({\"Name\": [\"Alice\", \"Bob\"],\n...                 \"Age\": [24, 19]}, headers=\"keys\"))\n  Age  Name\n-----  ------\n   24  Alice\n   19  Bob\nRow Indices\nBy default, only pandas.DataFrame tables have an additional column\ncalled row index. To add a similar column to any other type of table,\npass showindex=\"always\" or showindex=True argument to tabulate().\nTo suppress row indices for all types of data, pass showindex=\"never\"\nor showindex=False. To add a custom row index column, pass\nshowindex=rowIDs, where rowIDs is some iterable:\n>>> print(tabulate([[\"F\",24],[\"M\",19]], showindex=\"always\"))\n-  -  --\n0  F  24\n1  M  19\n-  -  --\nTable format\nThere is more than one way to format a table in plain text. The third\noptional argument named tablefmt defines how the table is formatted.\nSupported table formats are:\n\n\"plain\"\n\"simple\"\n\"github\"\n\"grid\"\n\"simple_grid\"\n\"rounded_grid\"\n\"heavy_grid\"\n\"mixed_grid\"\n\"double_grid\"\n\"fancy_grid\"\n\"outline\"\n\"simple_outline\"\n\"rounded_outline\"\n\"heavy_outline\"\n\"mixed_outline\"\n\"double_outline\"\n\"fancy_outline\"\n\"pipe\"\n\"orgtbl\"\n\"asciidoc\"\n\"jira\"\n\"presto\"\n\"pretty\"\n\"psql\"\n\"rst\"\n\"mediawiki\"\n\"moinmoin\"\n\"youtrack\"\n\"html\"\n\"unsafehtml\"\n\"latex\"\n\"latex_raw\"\n\"latex_booktabs\"\n\"latex_longtable\"\n\"textile\"\n\"tsv\"\n\nplain tables do not use any pseudo-graphics to draw lines:\n>>> table = [[\"spam\",42],[\"eggs\",451],[\"bacon\",0]]\n>>> headers = [\"item\", \"qty\"]\n>>> print(tabulate(table, headers, tablefmt=\"plain\"))\nitem      qty\nspam       42\neggs      451\nbacon       0\nsimple is the default format (the default may change in future\nversions). It corresponds to simple_tables in Pandoc Markdown\nextensions:\n>>> print(tabulate(table, headers, tablefmt=\"simple\"))\nitem      qty\n------  -----\nspam       42\neggs      451\nbacon       0\ngithub follows the conventions of GitHub flavored Markdown. It\ncorresponds to the pipe format without alignment colons:\n>>> print(tabulate(table, headers, tablefmt=\"github\"))\n| item   | qty   |\n|--------|-------|\n| spam   | 42    |\n| eggs   | 451   |\n| bacon  | 0     |\ngrid is like tables formatted by Emacs'\ntable.el package. 
It corresponds to\ngrid_tables in Pandoc Markdown extensions:\n>>> print(tabulate(table, headers, tablefmt=\"grid\"))\n+--------+-------+\n| item   |   qty |\n+========+=======+\n| spam   |    42 |\n+--------+-------+\n| eggs   |   451 |\n+--------+-------+\n| bacon  |     0 |\n+--------+-------+\nsimple_grid draws a grid using single-line box-drawing characters:\n>>> print(tabulate(table, headers, tablefmt=\"simple_grid\"))\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 item   \u2502   qty \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 spam   \u2502    42 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 eggs   \u2502   451 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 bacon  \u2502     0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nrounded_grid draws a grid using single-line box-drawing characters with rounded corners:\n>>> print(tabulate(table, headers, tablefmt=\"rounded_grid\"))\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 item   \u2502   qty \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 spam   \u2502    42 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 eggs   \u2502   451 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 bacon  \u2502     0 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\nheavy_grid draws a grid using bold (thick) single-line box-drawing characters:\n>>> print(tabulate(table, headers, tablefmt=\"heavy_grid\"))\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 item   \u2503   qty \u2503\n\u2523\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u254b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252b\n\u2503 spam   \u2503    42 \u2503\n\u2523\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u254b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252b\n\u2503 eggs   \u2503   451 \u2503\n\u2523\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u254b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252b\n\u2503 bacon  \u2503     0 \u2503\n\u2517\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u253b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u251b\n\nmixed_grid draws a grid using a mix of light (thin) and heavy (thick) lines box-drawing characters:\n>>> print(tabulate(table, headers, tablefmt=\"mixed_grid\"))\n\u250d\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2511\n\u2502 item   \u2502   qty \u2502\n\u251d\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u253f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2525\n\u2502 spam   \u2502    42 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 eggs   \u2502   451 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 bacon  \u2502     0 
\u2502\n\u2515\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2537\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2519\n\ndouble_grid draws a grid using double-line box-drawing characters:\n>>> print(tabulate(table, headers, tablefmt=\"double_grid\"))\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551 item   \u2551   qty \u2551\n\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563\n\u2551 spam   \u2551    42 \u2551\n\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563\n\u2551 eggs   \u2551   451 \u2551\n\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563\n\u2551 bacon  \u2551     0 \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n\nfancy_grid draws a grid using a mix of single and\ndouble-line box-drawing characters:\n>>> print(tabulate(table, headers, tablefmt=\"fancy_grid\"))\n\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 item   \u2502   qty \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 spam   \u2502    42 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 eggs   \u2502   451 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 bacon  \u2502     0 \u2502\n\u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\noutline is the same as the grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"outline\"))\n+--------+-------+\n| item   |   qty |\n+========+=======+\n| spam   |    42 |\n| eggs   |   451 |\n| bacon  |     0 |\n+--------+-------+\n\nsimple_outline is the same as the simple_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"simple_outline\"))\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 item   \u2502   qty \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 spam   \u2502    42 \u2502\n\u2502 eggs   \u2502   451 \u2502\n\u2502 bacon  \u2502     0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nrounded_outline is the same as the rounded_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"rounded_outline\"))\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 item   \u2502   qty \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 spam   \u2502    42 \u2502\n\u2502 eggs   \u2502   451 \u2502\n\u2502 bacon  \u2502     0 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\nheavy_outline is the same as the heavy_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, 
tablefmt=\"heavy_outline\"))\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 item   \u2503   qty \u2503\n\u2523\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u254b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252b\n\u2503 spam   \u2503    42 \u2503\n\u2503 eggs   \u2503   451 \u2503\n\u2503 bacon  \u2503     0 \u2503\n\u2517\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u253b\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u251b\n\nmixed_outline is the same as the mixed_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"mixed_outline\"))\n\u250d\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u252f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2511\n\u2502 item   \u2502   qty \u2502\n\u251d\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u253f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2525\n\u2502 spam   \u2502    42 \u2502\n\u2502 eggs   \u2502   451 \u2502\n\u2502 bacon  \u2502     0 \u2502\n\u2515\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2537\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2519\n\ndouble_outline is the same as the double_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"double_outline\"))\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\n\u2551 item   \u2551   qty \u2551\n\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563\n\u2551 spam   \u2551    42 \u2551\n\u2551 eggs   \u2551   451 \u2551\n\u2551 bacon  \u2551     0 \u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n\nfancy_outline is the same as the fancy_grid format but doesn't draw lines between rows:\n>>> print(tabulate(table, headers, tablefmt=\"fancy_outline\"))\n\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 item   \u2502   qty \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 spam   \u2502    42 \u2502\n\u2502 eggs   \u2502   451 \u2502\n\u2502 bacon  \u2502     0 \u2502\n\u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\n\npresto is like tables formatted by Presto cli:\n>>> print(tabulate(table, headers, tablefmt=\"presto\"))\n item   |   qty\n--------+-------\n spam   |    42\n eggs   |   451\n bacon  |     0\npretty attempts to be close to the format emitted by the PrettyTables\nlibrary:\n>>> print(tabulate(table, headers, tablefmt=\"pretty\"))\n+-------+-----+\n| item  | qty |\n+-------+-----+\n| spam  | 42  |\n| eggs  | 451 |\n| bacon |  0  |\n+-------+-----+\npsql is like tables formatted by Postgres' psql cli:\n>>> print(tabulate(table, headers, tablefmt=\"psql\"))\n+--------+-------+\n| item   |   qty |\n|--------+-------|\n| spam   |    42 |\n| eggs   |   451 |\n| bacon  |     0 |\n+--------+-------+\npipe follows the conventions of PHP Markdown\nExtra extension.\nIt corresponds to pipe_tables in Pandoc. 
This format uses colons to\nindicate column alignment:\n>>> print(tabulate(table, headers, tablefmt=\"pipe\"))\n| item   |   qty |\n|:-------|------:|\n| spam   |    42 |\n| eggs   |   451 |\n| bacon  |     0 |\nasciidoc formats data like a simple table of the\nAsciiDoctor\nformat:\n>>> print(tabulate(table, headers, tablefmt=\"asciidoc\"))\n[cols=\"8<,7>\",options=\"header\"]\n|====\n| item   |   qty\n| spam   |    42\n| eggs   |   451\n| bacon  |     0\n|====\norgtbl follows the conventions of Emacs\norg-mode, and is editable also\nin the minor orgtbl-mode. Hence its name:\n>>> print(tabulate(table, headers, tablefmt=\"orgtbl\"))\n| item   |   qty |\n|--------+-------|\n| spam   |    42 |\n| eggs   |   451 |\n| bacon  |     0 |\njira follows the conventions of Atlassian Jira markup language:\n>>> print(tabulate(table, headers, tablefmt=\"jira\"))\n|| item   ||   qty ||\n| spam   |    42 |\n| eggs   |   451 |\n| bacon  |     0 |\nrst formats data like a simple table of the\nreStructuredText\nformat:\n>>> print(tabulate(table, headers, tablefmt=\"rst\"))\n======  =====\nitem      qty\n======  =====\nspam       42\neggs      451\nbacon       0\n======  =====\nmediawiki format produces a table markup used in\nWikipedia and on other\nMediaWiki-based sites:\n>>> print(tabulate(table, headers, tablefmt=\"mediawiki\"))\n{| class=\"wikitable\" style=\"text-align: left;\"\n|+ <!-- caption -->\n|-\n! item   !! style=\"text-align: right;\"|   qty\n|-\n| spam   || style=\"text-align: right;\"|    42\n|-\n| eggs   || style=\"text-align: right;\"|   451\n|-\n| bacon  || style=\"text-align: right;\"|     0\n|}\nmoinmoin format produces a table markup used in\nMoinMoin wikis:\n>>> print(tabulate(table, headers, tablefmt=\"moinmoin\"))\n|| ''' item   ''' || ''' quantity   ''' ||\n||  spam    ||  41.999      ||\n||  eggs    ||  451         ||\n||  bacon   ||              ||\nyoutrack format produces a table markup used in Youtrack tickets:\n>>> print(tabulate(table, headers, tablefmt=\"youtrack\"))\n||  item    ||  quantity   ||\n|   spam    |  41.999      |\n|   eggs    |  451         |\n|   bacon   |              |\ntextile format produces a table markup used in\nTextile format:\n>>> print(tabulate(table, headers, tablefmt=\"textile\"))\n|_.  item   |_.   qty |\n|<. spam    |>.    42 |\n|<. eggs    |>.   451 |\n|<. bacon   |>.     
0 |\nhtml produces standard HTML markup as an html.escape'd str\nwith a .repr_html method so that Jupyter Lab and Notebook display the HTML\nand a .str property so that the raw HTML remains accessible.\nunsafehtml table format can be used if an unescaped HTML is required:\n>>> print(tabulate(table, headers, tablefmt=\"html\"))\n<table>\n<tbody>\n<tr><th>item  </th><th style=\"text-align: right;\">  qty</th></tr>\n<tr><td>spam  </td><td style=\"text-align: right;\">   42</td></tr>\n<tr><td>eggs  </td><td style=\"text-align: right;\">  451</td></tr>\n<tr><td>bacon </td><td style=\"text-align: right;\">    0</td></tr>\n</tbody>\n</table>\nlatex format creates a tabular environment for LaTeX markup,\nreplacing special characters like _ or \\ to their LaTeX\ncorrespondents:\n>>> print(tabulate(table, headers, tablefmt=\"latex\"))\n\\begin{tabular}{lr}\n\\hline\n item   &   qty \\\\\n\\hline\n spam   &    42 \\\\\n eggs   &   451 \\\\\n bacon  &     0 \\\\\n\\hline\n\\end{tabular}\nlatex_raw behaves like latex but does not escape LaTeX commands and\nspecial characters.\nlatex_booktabs creates a tabular environment for LaTeX markup using\nspacing and style from the booktabs package.\nlatex_longtable creates a table that can stretch along multiple pages,\nusing the longtable package.\nColumn alignment\ntabulate is smart about column alignment. It detects columns which\ncontain only numbers, and aligns them by a decimal point (or flushes\nthem to the right if they appear to be integers). Text columns are\nflushed to the left.\nYou can override the default alignment with numalign and stralign\nnamed arguments. Possible column alignments are: right, center,\nleft, decimal (only for numbers), and None (to disable alignment).\nAligning by a decimal point works best when you need to compare numbers\nat a glance:\n>>> print(tabulate([[1.2345],[123.45],[12.345],[12345],[1234.5]]))\n----------\n    1.2345\n  123.45\n   12.345\n12345\n 1234.5\n----------\nCompare this with a more common right alignment:\n>>> print(tabulate([[1.2345],[123.45],[12.345],[12345],[1234.5]], numalign=\"right\"))\n------\n1.2345\n123.45\n12.345\n 12345\n1234.5\n------\nFor tabulate, anything which can be parsed as a number is a number.\nEven numbers represented as strings are aligned properly. This feature\ncomes in handy when reading a mixed table of text and numbers from a\nfile:\n>>> import csv ; from StringIO import StringIO\n>>> table = list(csv.reader(StringIO(\"spam, 42\\neggs, 451\\n\")))\n>>> table\n[['spam', ' 42'], ['eggs', ' 451']]\n>>> print(tabulate(table))\n----  ----\nspam    42\neggs   451\n----  ----\nTo disable this feature use disable_numparse=True.\n>>> print(tabulate.tabulate([[\"Ver1\", \"18.0\"], [\"Ver2\",\"19.2\"]], tablefmt=\"simple\", disable_numparse=True))\n----  ----\nVer1  18.0\nVer2  19.2\n----  ----\nCustom column alignment\ntabulate allows a custom column alignment to override the smart alignment described above.\nUse colglobalalign to define a global setting. Possible alignments are: right, center, left, decimal (only for numbers).\nFurthermore, you can define colalign for column-specific alignment as a list or a tuple. Possible values are global (keeps global setting), right, center, left, decimal (only for numbers), None (to disable alignment). 
Missing alignments are treated as global.\n>>> print(tabulate([[1,2,3,4],[111,222,333,444]], colglobalalign='center', colalign = ('global','left','right')))\n---  ---  ---  ---\n 1   2      3   4\n111  222  333  444\n---  ---  ---  ---\nCustom header alignment\nHeaders' alignment can be defined separately from columns'. Like for columns, you can use:\n\nheadersglobalalign to define a header-specific global alignment setting. Possible values are right, center, left, None (to follow column alignment),\nheadersalign list or tuple to further specify header-wise alignment. Possible values are global (keeps global setting), same (follow column alignment), right, center, left, None (to disable alignment). Missing alignments are treated as global.\n\n>>> print(tabulate([[1,2,3,4,5,6],[111,222,333,444,555,666]], colglobalalign = 'center', colalign = ('left',), headers = ['h','e','a','d','e','r'], headersglobalalign = 'right', headersalign = ('same','same','left','global','center')))\n\nh     e   a      d   e     r\n---  ---  ---  ---  ---  ---\n1     2    3    4    5    6\n111  222  333  444  555  666\nNumber formatting\ntabulate allows to define custom number formatting applied to all\ncolumns of decimal numbers. Use floatfmt named argument:\n>>> print(tabulate([[\"pi\",3.141593],[\"e\",2.718282]], floatfmt=\".4f\"))\n--  ------\npi  3.1416\ne   2.7183\n--  ------\nfloatfmt argument can be a list or a tuple of format strings, one per\ncolumn, in which case every column may have different number formatting:\n>>> print(tabulate([[0.12345, 0.12345, 0.12345]], floatfmt=(\".1f\", \".3f\")))\n---  -----  -------\n0.1  0.123  0.12345\n---  -----  -------\nintfmt works similarly for integers\n>>> print(tabulate([[\"a\",1000],[\"b\",90000]], intfmt=\",\"))\n-  ------\na   1,000\nb  90,000\n-  ------\n\nText formatting\nBy default, tabulate removes leading and trailing whitespace from text\ncolumns. To disable whitespace removal, set the global module-level flag\nPRESERVE_WHITESPACE:\nimport tabulate\ntabulate.PRESERVE_WHITESPACE = True\nWide (fullwidth CJK) symbols\nTo properly align tables which contain wide characters (typically\nfullwidth glyphs from Chinese, Japanese or Korean languages), the user\nshould install wcwidth library. To install it together with\ntabulate:\npip install tabulate[widechars]\nWide character support is enabled automatically if wcwidth library is\nalready installed. To disable wide characters support without\nuninstalling wcwidth, set the global module-level flag\nWIDE_CHARS_MODE:\nimport tabulate\ntabulate.WIDE_CHARS_MODE = False\nMultiline cells\nMost table formats support multiline cell text (text containing newline\ncharacters). The newline characters are honored as line break\ncharacters.\nMultiline cells are supported for data rows and for header rows.\nFurther automatic line breaks are not inserted. Of course, some output\nformats such as latex or html handle automatic formatting of the cell\ncontent on their own, but for those that don't, the newline characters\nin the input cell text are the only means to break a line in cell text.\nNote that some output formats (e.g. 
simple, or plain) do not represent\nrow delimiters, so that the representation of multiline cells in such\nformats may be ambiguous to the reader.\nThe following examples of formatted output use the following table with\na multiline cell, and headers with a multiline cell:\n>>> table = [[\"eggs\",451],[\"more\\nspam\",42]]\n>>> headers = [\"item\\nname\", \"qty\"]\nplain tables:\n>>> print(tabulate(table, headers, tablefmt=\"plain\"))\nitem      qty\nname\neggs      451\nmore       42\nspam\nsimple tables:\n>>> print(tabulate(table, headers, tablefmt=\"simple\"))\nitem      qty\nname\n------  -----\neggs      451\nmore       42\nspam\ngrid tables:\n>>> print(tabulate(table, headers, tablefmt=\"grid\"))\n+--------+-------+\n| item   |   qty |\n| name   |       |\n+========+=======+\n| eggs   |   451 |\n+--------+-------+\n| more   |    42 |\n| spam   |       |\n+--------+-------+\nfancy_grid tables:\n>>> print(tabulate(table, headers, tablefmt=\"fancy_grid\"))\n\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 item   \u2502   qty \u2502\n\u2502 name   \u2502       \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 eggs   \u2502   451 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 more   \u2502    42 \u2502\n\u2502 spam   \u2502       \u2502\n\u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\npipe tables:\n>>> print(tabulate(table, headers, tablefmt=\"pipe\"))\n| item   |   qty |\n| name   |       |\n|:-------|------:|\n| eggs   |   451 |\n| more   |    42 |\n| spam   |       |\norgtbl tables:\n>>> print(tabulate(table, headers, tablefmt=\"orgtbl\"))\n| item   |   qty |\n| name   |       |\n|--------+-------|\n| eggs   |   451 |\n| more   |    42 |\n| spam   |       |\njira tables:\n>>> print(tabulate(table, headers, tablefmt=\"jira\"))\n| item   |   qty |\n| name   |       |\n|:-------|------:|\n| eggs   |   451 |\n| more   |    42 |\n| spam   |       |\npresto tables:\n>>> print(tabulate(table, headers, tablefmt=\"presto\"))\n item   |   qty\n name   |\n--------+-------\n eggs   |   451\n more   |    42\n spam   |\npretty tables:\n>>> print(tabulate(table, headers, tablefmt=\"pretty\"))\n+------+-----+\n| item | qty |\n| name |     |\n+------+-----+\n| eggs | 451 |\n| more | 42  |\n| spam |     |\n+------+-----+\npsql tables:\n>>> print(tabulate(table, headers, tablefmt=\"psql\"))\n+--------+-------+\n| item   |   qty |\n| name   |       |\n|--------+-------|\n| eggs   |   451 |\n| more   |    42 |\n| spam   |       |\n+--------+-------+\nrst tables:\n>>> print(tabulate(table, headers, tablefmt=\"rst\"))\n======  =====\nitem      qty\nname\n======  =====\neggs      451\nmore       42\nspam\n======  =====\nMultiline cells are not well-supported for the other table formats.\nAutomating Multilines\nWhile tabulate supports data passed in with multilines entries explicitly provided,\nit also provides some support to help manage this work internally.\nThe maxcolwidths argument is a list where each entry specifies the max width for\nit's respective column. 
Any cell that will exceed this will automatically wrap the content.\nTo assign the same max width for all columns, a singular int scaler can be used.\nUse None for any columns where an explicit maximum does not need to be provided,\nand thus no automate multiline wrapping will take place.\nThe wrapping uses the python standard textwrap.wrap\nfunction with default parameters - aside from width.\nThis example demonstrates usage of automatic multiline wrapping, though typically\nthe lines being wrapped would probably be significantly longer than this.\n>>> print(tabulate([[\"John Smith\", \"Middle Manager\"]], headers=[\"Name\", \"Title\"], tablefmt=\"grid\", maxcolwidths=[None, 8]))\n+------------+---------+\n| Name       | Title   |\n+============+=========+\n| John Smith | Middle  |\n|            | Manager |\n+------------+---------+\nAdding Separating lines\nOne might want to add one or more separating lines to highlight different sections in a table.\nThe separating lines will be of the same type as the one defined by the specified formatter as either the\nlinebetweenrows, linebelowheader, linebelow, lineabove or just a simple empty line when none is defined for the formatter\n>>> from tabulate import tabulate, SEPARATING_LINE\n\ntable = [[\"Earth\",6371],\n         [\"Mars\",3390],\n         SEPARATING_LINE,\n         [\"Moon\",1737]]\nprint(tabulate(table, tablefmt=\"simple\"))\n-----  ----\nEarth  6371\nMars   3390\n-----  ----\nMoon   1737\n-----  ----\n\nANSI support\nANSI escape codes are non-printable byte sequences usually used for terminal operations like setting\ncolor output or modifying cursor positions. Because multi-byte ANSI sequences are inherently non-printable,\nthey can still introduce unwanted extra length to strings. For example:\n>>> len('\\033[31mthis text is red\\033[0m')  # printable length is 16\n25\n\nTo deal with this, string lengths are calculated after first removing all ANSI escape sequences. This ensures\nthat the actual printable length is used for column widths, rather than the byte length. In the final, printable\ntable, however, ANSI escape sequences are not removed so the original styling is preserved.\nSome terminals support a special grouping of ANSI escape sequences that are intended to display hyperlinks\nmuch in the same way they are shown in browsers. These are handled just as mentioned before: non-printable\nANSI escape sequences are removed prior to string length calculation. The only diifference with escaped\nhyperlinks is that column width will be based on the length of the URL text rather than the URL\nitself (terminals would show this text). 
For example:\n>>> len('\\x1b]8;;https://example.com\\x1b\\\\example\\x1b]8;;\\x1b\\\\')  # display length is 7, showing 'example'\n45\n\nUsage of the command line utility\nUsage: tabulate [options] [FILE ...]\n\nFILE                      a filename of the file with tabular data;\n                          if \"-\" or missing, read data from stdin.\n\nOptions:\n\n-h, --help                show this message\n-1, --header              use the first row of data as a table header\n-o FILE, --output FILE    print table to FILE (default: stdout)\n-s REGEXP, --sep REGEXP   use a custom column separator (default: whitespace)\n-F FPFMT, --float FPFMT   floating point number format (default: g)\n-I INTFMT, --int INTFMT   integer point number format (default: \"\")\n-f FMT, --format FMT      set output table format; supported formats:\n                          plain, simple, github, grid, fancy_grid, pipe,\n                          orgtbl, rst, mediawiki, html, latex, latex_raw,\n                          latex_booktabs, latex_longtable, tsv\n                          (default: simple)\n\nPerformance considerations\nSuch features as decimal point alignment and trying to parse everything\nas a number imply that tabulate:\n\nhas to \"guess\" how to print a particular tabular data type\nneeds to keep the entire table in-memory\nhas to \"transpose\" the table twice\ndoes much more work than it may appear\n\nIt may not be suitable for serializing really big tables (but who's\ngoing to do that, anyway?) or printing tables in performance sensitive\napplications. tabulate is about two orders of magnitude slower than\nsimply joining lists of values with a tab, comma, or other separator.\nAt the same time, tabulate is comparable to other table\npretty-printers. Given a 10x10 table (a list of lists) of mixed text and\nnumeric data, tabulate appears to be slower than asciitable, and\nfaster than PrettyTable and texttable The following mini-benchmark\nwas run in Python 3.9.13 on Windows 10:\n=================================  ==========  ===========\nTable formatter                      time, \u03bcs    rel. time\n=================================  ==========  ===========\ncsv to StringIO                          12.5          1.0\njoin with tabs and newlines              14.6          1.2\nasciitable (0.8.0)                      192.0         15.4\ntabulate (0.9.0)                        483.5         38.7\ntabulate (0.9.0, WIDE_CHARS_MODE)       637.6         51.1\nPrettyTable (3.4.1)                    1080.6         86.6\ntexttable (1.6.4)                      1390.3        111.4\n=================================  ==========  ===========\n\nVersion history\nThe full version history can be found at the changelog.\nHow to contribute\nContributions should include tests and an explanation for the changes\nthey propose. Documentation (examples, docstrings, README.md) should be\nupdated accordingly.\nThis project uses pytest testing\nframework and tox to automate testing in\ndifferent environments. Add tests to one of the files in the test/\nfolder.\nTo run tests on all supported Python versions, make sure all Python\ninterpreters, pytest and tox are installed, then run tox in the root\nof the project source tree.\nOn Linux tox expects to find executables like python3.7, python3.8 etc.\nOn Windows it looks for C:\\Python37\\python.exe, C:\\Python38\\python.exe etc. 
respectively.\nOne way to install all the required versions of the Python interpreter is to use pyenv.\nAll versions can then be easily installed with something like:\n pyenv install 3.7.12\n pyenv install 3.8.12\n ...\n\nDon't forget to change your PATH so that tox knows how to find all the installed versions. Something like\n export PATH=\"${PATH}:${HOME}/.pyenv/shims\"\n\nTo test only some Python environments, use -e option. For example, to\ntest only against Python 3.7 and Python 3.10, run:\ntox -e py37,py310\nin the root of the project source tree.\nTo enable NumPy and Pandas tests, run:\ntox -e py37-extra,py310-extra\n(this may take a long time the first time, because NumPy and Pandas will\nhave to be installed in the new virtual environments)\nTo fix code formatting:\ntox -e lint\nSee tox.ini file to learn how to use to test\nindividual Python versions.\nContributors\nSergey Astanin, Pau Tallada Cresp\u00ed, Erwin Marsi, Mik Kocikowski, Bill\nRyder, Zach Dwiel, Frederik Rietdijk, Philipp Bogensberger, Greg\n(anonymous), Stefan Tatschner, Emiel van Miltenburg, Brandon Bennett,\nAmjith Ramanujam, Jan Schulz, Simon Percivall, Javier Santacruz\nL\u00f3pez-Cepero, Sam Denton, Alexey Ziyangirov, acaird, Cesar Sanchez,\nnaught101, John Vandenberg, Zack Dever, Christian Clauss, Benjamin\nMaier, Andy MacKinlay, Thomas Roten, Jue Wang, Joe King, Samuel Phan,\nNick Satterly, Daniel Robbins, Dmitry B, Lars Butler, Andreas Maier,\nDick Marinus, S\u00e9bastien Celles, Yago Gonz\u00e1lez, Andrew Gaul, Wim Glenn,\nJean Michel Rouly, Tim Gates, John Vandenberg, Sorin Sbarnea,\nWes Turner, Andrew Tija, Marco Gorelli, Sean McGinnis, danja100,\nendolith, Dominic Davis-Foster, pavlocat, Daniel Aslau, paulc,\nFelix Yan, Shane Loretz, Frank Busse, Harsh Singh, Derek Weitzel,\nVladimir Vrzi\u0107, \uc11c\uc2b9\uc6b0 (chrd5273), Georgy Frolov, Christian Cwienk,\nBart Broere, Vilhelm Prytz, Alexander Ga\u017eo, Hugo van Kemenade,\njamescooke, Matt Warner, J\u00e9r\u00f4me Provensal, Kevin Deldycke,\nKian-Meng Ang, Kevin Patterson, Shodhan Save, cleoold, KOLANICH,\nVijaya Krishna Kasula, Furcy Pin, Christian Fibich, Shaun Duncan,\nDimitri Papadopoulos, \u00c9lie Goudout.\n\n\n", "description": "Pretty-print tabular data in Python."}, {"name": "tabula", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ntabula\nDependencies\n\n\n\n\n\nREADME.md\n\n\n\n\ntabula\nAscii table\nDependencies\npip install numpy\n\n\n\n", "description": "Creates ASCII tables"}, {"name": "tables", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPyTables: hierarchical datasets in Python\nState-of-the-art compression\nNot a RDBMS replacement\nTables\nArrays\nEasy to use\nPlatforms\nCompiling\nInstallation\n\n\n\n\n\nREADME.rst\n\n\n\n\nPyTables: hierarchical datasets in Python\n\n\n\n\n\n\n\n\n\n\n\nURL:http://www.pytables.org/\n\n\n\nPyTables is a package for managing hierarchical datasets and designed\nto efficiently cope with extremely large amounts of data.\nIt is built on top of the HDF5 library and the NumPy package. It\nfeatures an object-oriented interface that, combined with C extensions\nfor the performance-critical parts of the code (generated using\nCython), makes it a fast, yet extremely easy to use tool for\ninteractively save and retrieve very large amounts of data. 
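For a first taste of that interactive save-and-retrieve workflow, here is a minimal sketch following the pattern of the PyTables tutorial; the file name demo.h5, the Particle description and the readout table are illustrative names chosen for this example, not part of the library:\nimport tables\n\n# Describe a record type: fixed-length, typed fields (see the Tables section below).\nclass Particle(tables.IsDescription):\n    name = tables.StringCol(16)    # 16-character string field\n    energy = tables.Float64Col()   # double-precision float field\n\n# Save: create an HDF5 file, add one table and append a few rows of made-up data.\nwith tables.open_file('demo.h5', mode='w') as h5file:\n    table = h5file.create_table(h5file.root, 'readout', Particle, 'Example readout')\n    row = table.row\n    for i in range(3):\n        row['name'] = 'particle-%d' % i\n        row['energy'] = 2.5 * i\n        row.append()    # stage the record\n    table.flush()       # write staged records to disk\n\n# Retrieve: reopen the file and query the rows just written.\nwith tables.open_file('demo.h5', mode='r') as h5file:\n    table = h5file.root.readout\n    print([r['energy'] for r in table.where('energy > 1.0')])\n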
One\nimportant feature of PyTables is that it optimizes memory and disk\nresources so that they take much less space (between a factor 3 to 5,\nand more if the data is compressible) than other solutions, like for\nexample, relational or object oriented databases.\n\nState-of-the-art compression\nPyTables comes with out-of-box support for the Blosc compressor.  This allows for extremely high compression\nspeed, while keeping decent compression ratios.  By doing so, I/O can\nbe accelerated by a large extent, and you may end achieving higher\nperformance than the bandwidth provided by your I/O subsystem.  See\nthe Tuning The Chunksize section of the Optimization Tips chapter\nof user documentation for some benchmarks.\n\nNot a RDBMS replacement\nPyTables is not designed to work as a relational database replacement,\nbut rather as a teammate. If you want to work with large datasets of\nmultidimensional data (for example, for multidimensional analysis), or\njust provide a categorized structure for some portions of your\ncluttered RDBS, then give PyTables a try. It works well for storing\ndata from data acquisition systems (DAS), simulation software, network\ndata monitoring systems (for example, traffic measurements of IP\npackets on routers), or as a centralized repository for system logs,\nto name only a few possible uses.\n\nTables\nA table is defined as a collection of records whose values are stored\nin fixed-length fields. All records have the same structure and all\nvalues in each field have the same data type. The terms \"fixed-length\"\nand strict \"data types\" seems to be quite a strange requirement for an\ninterpreted language like Python, but they serve a useful function if\nthe goal is to save very large quantities of data (such as is\ngenerated by many scientific applications, for example) in an\nefficient manner that reduces demand on CPU time and I/O.\n\nArrays\nThere are other useful objects like arrays, enlargeable arrays or\nvariable length arrays that can cope with different missions on your\nproject.\n\nEasy to use\nOne of the principal objectives of PyTables is to be user-friendly.\nIn addition, many different iterators have been implemented so as to\nenable the interactive work to be as productive as possible.\n\nPlatforms\nWe are using Linux on top of Intel32 and Intel64 boxes as the main\ndevelopment platforms, but PyTables should be easy to compile/install\non other UNIX or Windows machines.\n\nCompiling\nTo compile PyTables you will need, at least, a recent version of HDF5\n(C flavor) library, the Zlib compression library and the NumPy and\nNumexpr packages. Besides, it comes with support for the Blosc, LZO\nand bzip2 compressor libraries. Blosc is mandatory, but PyTables comes\nwith Blosc sources so, although it is recommended to have Blosc\ninstalled in your system, you don't absolutely need to install it\nseparately.  LZO and bzip2 compression libraries are, however,\noptional.\n\nInstallation\n\nMake sure you have HDF5 version 1.10.5 or above.\nOn OSX you can install HDF5 using Homebrew:\n$ brew install hdf5\n\nOn debian bases distributions:\n$ sudo apt-get install libhdf5-serial-dev\n\nIf you have the HDF5 library in some non-standard location (that\nis, where the compiler and the linker can't find it) you can use\nthe environment variable HDF5_DIR to specify its location. 
See\nthe manual for more\ndetails.\n\n\n\nFor stability (and performance too) reasons, it is strongly\nrecommended that you install the C-Blosc library separately,\nalthough you might want PyTables to use its internal C-Blosc\nsources.\n\n\nOptionally, consider to install the LZO compression library and/or\nthe bzip2 compression library.\n\nInstall!:\n$ python3 -m pip install tables\n\n\nTo run the test suite run:\n$ python3 -m tables.tests.test_all\n\nIf there is some test that does not pass, please send the\ncomplete output for tests back to us.\n\n\nEnjoy data! -- The PyTables Team\n\n\n", "description": "Hierarchical datasets in Python."}, {"name": "sympy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSymPy\nDownload\nDocumentation and Usage\nInstallation\nContributing\nTests\nRegenerate Experimental LaTeX Parser/Lexer\nClean\nBugs\nBrief History\nCitation\n\n\n\n\n\nREADME.md\n\n\n\n\nSymPy\n\n\n\n\n\n\n\n\n\nSee the AUTHORS file for the list of authors.\nAnd many more people helped on the SymPy mailing list, reported bugs,\nhelped organize SymPy's participation in the Google Summer of Code, the\nGoogle Highly Open Participation Contest, Google Code-In, wrote and\nblogged about SymPy...\nLicense: New BSD License (see the LICENSE file for details) covers all\nfiles in the sympy repository unless stated otherwise.\nOur mailing list is at\nhttps://groups.google.com/forum/?fromgroups#!forum/sympy.\nWe have a community chat at Gitter. Feel\nfree to ask us anything there. We have a very welcoming and helpful\ncommunity.\nDownload\nThe recommended installation method is through Anaconda,\nhttps://www.anaconda.com/products/distribution\nYou can also get the latest version of SymPy from\nhttps://pypi.python.org/pypi/sympy/\nTo get the git version do\n$ git clone https://github.com/sympy/sympy.git\n\nFor other options (tarballs, debs, etc.), see\nhttps://docs.sympy.org/dev/install.html.\nDocumentation and Usage\nFor in-depth instructions on installation and building the\ndocumentation, see the SymPy Documentation Style Guide.\nEverything is at:\nhttps://docs.sympy.org/\nYou can generate everything at the above site in your local copy of\nSymPy by:\n$ cd doc\n$ make html\n\nThen the docs will be in _build/html. If\nyou don't want to read that, here is a short usage:\nFrom this directory, start Python and:\n>>> from sympy import Symbol, cos\n>>> x = Symbol('x')\n>>> e = 1/cos(x)\n>>> print(e.series(x, 0, 10))\n1 + x**2/2 + 5*x**4/24 + 61*x**6/720 + 277*x**8/8064 + O(x**10)\nSymPy also comes with a console that is a simple wrapper around the\nclassic python console (or IPython when available) that loads the SymPy\nnamespace and executes some common commands for you.\nTo start it, issue:\n$ bin/isympy\n\nfrom this directory, if SymPy is not installed or simply:\n$ isympy\n\nif SymPy is installed.\nInstallation\nSymPy has a hard dependency on the mpmath library\n(version >= 0.19). 
You should install it first, please refer to the\nmpmath installation guide:\nhttps://github.com/fredrik-johansson/mpmath#1-download--installation\nTo install SymPy using PyPI, run the following command:\n$ pip install sympy\n\nTo install SymPy using Anaconda, run the following command:\n$ conda install -c anaconda sympy\n\nTo install SymPy from GitHub source, first clone SymPy using git:\n$ git clone https://github.com/sympy/sympy.git\n\nThen, in the sympy repository that you cloned, simply run:\n$ pip install .\n\nSee https://docs.sympy.org/dev/install.html for more information.\nContributing\nWe welcome contributions from anyone, even if you are new to open\nsource. Please read our Introduction to Contributing\npage and the SymPy Documentation Style Guide. If you\nare new and looking for some way to contribute, a good place to start is\nto look at the issues tagged Easy to Fix.\nPlease note that all participants in this project are expected to follow\nour Code of Conduct. By participating in this project you agree to abide\nby its terms. See CODE_OF_CONDUCT.md.\nTests\nTo execute all tests, run:\n$./setup.py test\n\nin the current directory.\nFor the more fine-grained running of tests or doctests, use bin/test\nor respectively bin/doctest. The master branch is automatically tested\nby GitHub Actions.\nTo test pull requests, use\nsympy-bot.\nRegenerate Experimental LaTeX Parser/Lexer\nThe parser and lexer were generated with the ANTLR4\ntoolchain in sympy/parsing/latex/_antlr and checked into the repo.\nPresently, most users should not need to regenerate these files, but\nif you plan to work on this feature, you will need the antlr4\ncommand-line tool (and you must ensure that it is in your PATH).\nOne way to get it is:\n$ conda install -c conda-forge antlr=4.11.1\n\nAlternatively, follow the instructions on the ANTLR website and download\nthe antlr-4.11.1-complete.jar. Then export the CLASSPATH as instructed\nand instead of creating antlr4 as an alias, make it an executable file\nwith the following contents:\n#!/bin/bash\njava -jar /usr/local/lib/antlr-4.11.1-complete.jar \"$@\"\nAfter making changes to sympy/parsing/latex/LaTeX.g4, run:\n$ ./setup.py antlr\n\nClean\nTo clean everything (thus getting the same tree as in the repository):\n$ git clean -Xdf\n\nwhich will clear everything ignored by .gitignore, and:\n$ git clean -df\n\nto clear all untracked files. You can revert the most recent changes in\ngit with:\n$ git reset --hard\n\nWARNING: The above commands will all clear changes you may have made,\nand you will lose them forever. Be sure to check things with git status, git diff, git clean -Xn, and git clean -n before doing any\nof those.\nBugs\nOur issue tracker is at https://github.com/sympy/sympy/issues. Please\nreport any bugs that you find. Or, even better, fork the repository on\nGitHub and create a pull request. We welcome all changes, big or small,\nand we will help you make the pull request if you are new to git (just\nask on our mailing list or Gitter Channel). If you further have any queries, you can find answers\non Stack Overflow using the sympy tag.\nBrief History\nSymPy was started by Ond\u0159ej \u010cert\u00edk in 2005, he wrote some code during\nthe summer, then he wrote some more code during summer 2006. In February\n2007, Fabian Pedregosa joined the project and helped fix many things,\ncontributed documentation, and made it alive again. 
5 students (Mateusz\nPaprocki, Brian Jorgensen, Jason Gedge, Robert Schwarz, and Chris Wu)\nimproved SymPy incredibly during summer 2007 as part of the Google\nSummer of Code. Pearu Peterson joined the development during the summer\n2007 and he has made SymPy much more competitive by rewriting the core\nfrom scratch, which has made it from 10x to 100x faster. Jurjen N.E. Bos\nhas contributed pretty-printing and other patches. Fredrik Johansson has\nwritten mpmath and contributed a lot of patches.\nSymPy has participated in every Google Summer of Code since 2007. You\ncan see https://github.com/sympy/sympy/wiki#google-summer-of-code for\nfull details. Each year has improved SymPy by bounds. Most of SymPy's\ndevelopment has come from Google Summer of Code students.\nIn 2011, Ond\u0159ej \u010cert\u00edk stepped down as lead developer, with Aaron\nMeurer, who also started as a Google Summer of Code student, taking his\nplace. Ond\u0159ej \u010cert\u00edk is still active in the community but is too busy\nwith work and family to play a lead development role.\nSince then, a lot more people have joined the development and some\npeople have also left. You can see the full list in doc/src/aboutus.rst,\nor online at:\nhttps://docs.sympy.org/dev/aboutus.html#sympy-development-team\nThe git history goes back to 2007 when development moved from svn to hg.\nTo see the history before that point, look at\nhttps://github.com/sympy/sympy-old.\nYou can use git to see the biggest developers. The command:\n$ git shortlog -ns\n\nwill show each developer, sorted by commits to the project. The command:\n$ git shortlog -ns --since=\"1 year\"\n\nwill show the top developers from the last year.\nCitation\nTo cite SymPy in publications use\n\nMeurer A, Smith CP, Paprocki M, \u010cert\u00edk O, Kirpichev SB, Rocklin M,\nKumar A, Ivanov S, Moore JK, Singh S, Rathnayake T, Vig S, Granger BE,\nMuller RP, Bonazzi F, Gupta H, Vats S, Johansson F, Pedregosa F, Curry\nMJ, Terrel AR, Rou\u010dka \u0160, Saboo A, Fernando I, Kulal S, Cimrman R,\nScopatz A. (2017) SymPy: symbolic computing in Python. PeerJ Computer\nScience 3:e103 https://doi.org/10.7717/peerj-cs.103\n\nA BibTeX entry for LaTeX users is\n@article{10.7717/peerj-cs.103,\n title = {SymPy: symbolic computing in Python},\n author = {Meurer, Aaron and Smith, Christopher P. and Paprocki, Mateusz and \\v{C}ert\\'{i}k, Ond\\v{r}ej and Kirpichev, Sergey B. and Rocklin, Matthew and Kumar, Amit and Ivanov, Sergiu and Moore, Jason K. and Singh, Sartaj and Rathnayake, Thilina and Vig, Sean and Granger, Brian E. and Muller, Richard P. and Bonazzi, Francesco and Gupta, Harsh and Vats, Shivam and Johansson, Fredrik and Pedregosa, Fabian and Curry, Matthew J. and Terrel, Andy R. and Rou\\v{c}ka, \\v{S}t\\v{e}p\\'{a}n and Saboo, Ashutosh and Fernando, Isuru and Kulal, Sumith and Cimrman, Robert and Scopatz, Anthony},\n year = 2017,\n month = Jan,\n keywords = {Python, Computer algebra system, Symbolics},\n abstract = {\n            SymPy is an open-source computer algebra system written in pure Python. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. This paper presents the architecture of SymPy, a description of its features, and a discussion of select submodules. 
The supplementary material provides additional examples and further outlines details of the architecture and features of SymPy.\n         },\n volume = 3,\n pages = {e103},\n journal = {PeerJ Computer Science},\n issn = {2376-5992},\n url = {https://doi.org/10.7717/peerj-cs.103},\n doi = {10.7717/peerj-cs.103}\n}\nSymPy is BSD licensed, so you are free to use it whatever you like, be\nit academic, commercial, creating forks or derivatives, as long as you\ncopy the BSD statement if you redistribute it (see the LICENSE file for\ndetails). That said, although not required by the SymPy license, if it\nis convenient for you, please cite SymPy when using it in your work and\nalso consider contributing all your changes back, so that we can\nincorporate it and all of us will benefit in the end.\n\n\n", "description": "Symbolic mathematics library for Python."}, {"name": "svgwrite", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nsvgwrite\nAbstract\nInstallation\nDocumentation\nContact\n\n\n\n\n\nREADME.rst\n\n\n\n\nsvgwrite\nThis package is inactive! No new features will be added, there will\nbe no change of behavior, just bugfixes will be merged.\n\nAbstract\nA Python library to create SVG drawings.\na simple example:\nimport svgwrite\n\ndwg = svgwrite.Drawing('test.svg', profile='tiny')\ndwg.add(dwg.line((0, 0), (10, 0), stroke=svgwrite.rgb(10, 10, 16, '%')))\ndwg.add(dwg.text('Test', insert=(0, 0.2), fill='red'))\ndwg.save()\n\nfor more examples see: examples.py\nAs the name svgwrite implies, svgwrite creates new SVG drawings, it does not read existing drawings and also does\nnot import existing drawings, but you can always include other SVG drawings by the <image> entity.\nsvgwrite is a pure Python package and has no external dependencies.\n\nInstallation\nwith pip:\npip install svgwrite\n\nor from source:\npython setup.py install\n\n\nDocumentation\nhttp://readthedocs.org/docs/svgwrite/\nsvgwrite can be found on GitHub.com at:\nhttp://github.com/mozman/svgwrite.git\n\nContact\nsvgwrite@mozman.at\n\n\n", "description": "Creates SVG drawings"}, {"name": "svglib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSvglib\nA pure-Python library for reading and converting SVG\nAbout\nFeatures\nKnown limitations\nExamples\nDependencies\nInstallation\n1. Using pip\n2. Using conda\n3. Manual installation\nTesting\nBug reports\n\n\n\n\n\nREADME.rst\n\n\n\n\nSvglib\nA pure-Python library for reading and converting SVG\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout\nSvglib is a pure-Python library for reading SVG files and converting\nthem (to a reasonable degree) to other formats using the ReportLab Open\nSource toolkit.\nUsed as a package you can read existing SVG files and convert them into\nReportLab Drawing objects that can be used in a variety of contexts,\ne.g. as ReportLab Platypus Flowable objects or in RML.\nAs a command-line tool it converts SVG files into PDF ones (but adding\nother output formats like bitmap or EPS is really easy and will be better\nsupported, soon).\nTests include a huge W3C SVG test suite plus ca. 
200 flags from\nWikipedia and some selected symbols from Wikipedia (with increasingly\nless pointing to missing features).\n\nFeatures\n\nconvert SVG files into ReportLab Graphics Drawing objects\nhandle plain or compressed SVG files (.svg and .svgz)\nallow patterns for output files on command-line\ninstall a Python package named svglib\ninstall a Python command-line script named svg2pdf\nprovide a PyTest test suite with over 90% code coverage\ntest entire W3C SVG test suite after pulling from the internet\ntest all SVG flags from Wikipedia after pulling from the internet\ntest selected SVG symbols from Wikipedia after pulling from the net\nsupport Python 3.7+ and PyPy3\n\n\nKnown limitations\n\n@import rules in stylesheets are ignored. CSS is supported, but the range\nof supported attributes is still limited\nclipping is limited to single paths, no mask support\ncolor gradients are not supported (limitation of reportlab)\nSVG ForeignObject elements are not supported.\n\n\nExamples\nYou can use svglib as a Python package e.g. like in the following\ninteractive Python session:\n>>> from svglib.svglib import svg2rlg\n>>> from reportlab.graphics import renderPDF, renderPM\n>>>\n>>> drawing = svg2rlg(\"file.svg\")\n>>> renderPDF.drawToFile(drawing, \"file.pdf\")\n>>> renderPM.drawToFile(drawing, \"file.png\", fmt=\"PNG\")\nNote that the second parameter of drawToFile can be any\nPython file object, like a BytesIO buffer if you don't want the result\nto be written on disk for example.\nIn addition a script named svg2pdf can be used more easily from\nthe system command-line. Here is the output from svg2pdf -h:\nusage: svg2pdf [-h] [-v] [-o PATH_PAT] [PATH [PATH ...]]\n\nsvg2pdf v. x.x.x\nA converter from SVG to PDF (via ReportLab Graphics)\n\npositional arguments:\n  PATH                  Input SVG file path with extension .svg or .svgz.\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -v, --version         Print version number and exit.\n  -o PATH_PAT, --output PATH_PAT\n                        Set output path (incl. the placeholders: dirname,\n                        basename,base, ext, now) in both, %(name)s and {name}\n                        notations.\n\nexamples:\n  # convert path/file.svg to path/file.pdf\n  svg2pdf path/file.svg\n\n  # convert file1.svg to file1.pdf and file2.svgz to file2.pdf\n  svg2pdf file1.svg file2.svgz\n\n  # convert file.svg to out.pdf\n  svg2pdf -o out.pdf file.svg\n\n  # convert all SVG files in path/ to PDF files with names like:\n  # path/file1.svg -> file1.pdf\n  svg2pdf -o \"%(base)s.pdf\" path/file*.svg\n\n  # like before but with timestamp in the PDF files:\n  # path/file1.svg -> path/out-12-58-36-file1.pdf\n  svg2pdf -o {{dirname}}/out-{{now.hour}}-{{now.minute}}-{{now.second}}-%(base)s.pdf path/file*.svg\n\nissues/pull requests:\n    https://github.com/deeplook/svglib\n\nCopyleft by Dinu Gherman, 2008-2021 (LGPL 3):\n    http://www.gnu.org/copyleft/gpl.html\n\n\nDependencies\nSvglib depends mainly on the reportlab package, which provides\nthe abstractions for building complex Drawings which it can render\ninto different fileformats, including PDF, EPS, SVG and various bitmaps\nones. Other dependancies are lxml which is used in the context of SVG\nCSS stylesheets.\n\nInstallation\nThere are three ways to install svglib.\n\n1. 
Using pip\nWith the pip command on your system and a working internet\nconnection you can install the newest version of svglib with only\none command in a terminal:\n$ pip install svglib\n\nYou can also use pip to install the very latest version of the\nrepository from GitHub, but then you won't be able to conveniently\nrun the test suite:\n$ pip install git+https://github.com/deeplook/svglib\n\n\n2. Using conda\nIf you use Anaconda or Miniconda you are surely using its respective package\nmanager, Conda, as well. In that case you should be able to install svglib\nusing these simple commands:\n$ conda config --add channels conda-forge\n$ conda install svglib\n\nSvglib was kindly packaged for conda by nicoddemus. See here more about\nsvglib with conda.\n\n3. Manual installation\nAlternatively, you can install a tarball like svglib-<version>.tar.gz\nafter downloading it from the svglib page on PyPI or the\nsvglib releases page on GitHub and executing a sequence of commands\nlike shown here:\n$ tar xfz svglib-<version>.tar.gz\n$ cd svglib-<version>\n$ python setup.py install\n\nThis will install a Python package named svglib in the\nsite-packages subfolder of your Python installation and a script\ntool named svg2pdf in your bin directory, e.g. in\n/usr/local/bin.\n\nTesting\nThe svglib tarball distribution contains a PyTest test suite\nin the tests directory. There, in tests/README.rst, you can\nalso read more about testing. You can run the testsuite e.g. like\nshown in the following lines on the command-line:\n$ tar xfz svglib-<version>.tar.gz\n$ cd svglib-<version>\n$ PYTHONPATH=. py.test\n======================== test session starts =========================\nplatform darwin -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0\nrootdir: /Users/dinu/repos/github/deeplook/svglib, inifile:\nplugins: cov-2.4.0\ncollected 36 items\n\ntests/test_basic.py ............................\ntests/test_samples.py .s.s.s.s\n\n=============== 32 passed, 4 skipped in 49.18 seconds ================\n\n\nBug reports\nPlease report bugs on the svglib issue tracker on GitHub (pull\nrequests are also appreciated)!\nIf necessary, please include information about the operating system, as\nwell as the versions of svglib, ReportLab and Python being used!\n\n\n", "description": "Reads and converts SVG images"}, {"name": "statsmodels", "readme": "\n\n   \n   \n\nAbout statsmodels\nstatsmodels is a Python package that provides a complement to scipy for\nstatistical computations including descriptive statistics and estimation\nand inference for statistical models.\n\n\nDocumentation\nThe documentation for the latest release is at\nhttps://www.statsmodels.org/stable/\nThe documentation for the development version is at\nhttps://www.statsmodels.org/dev/\nRecent improvements are highlighted in the release notes\nhttps://www.statsmodels.org/stable/release/\nBackups of documentation are available at https://statsmodels.github.io/stable/\nand https://statsmodels.github.io/dev/.\n\n\nMain Features\n\nLinear regression models:\n\nOrdinary least squares\nGeneralized least squares\nWeighted least squares\nLeast squares with autoregressive errors\nQuantile regression\nRecursive least squares\n\n\nMixed Linear Model with mixed effects and variance components\nGLM: Generalized linear models with support for all of the one-parameter\nexponential family distributions\nBayesian Mixed GLM for Binomial and Poisson\nGEE: Generalized Estimating Equations for one-way clustered or longitudinal data\nDiscrete models:\n\nLogit and 
Probit\nMultinomial logit (MNLogit)\nPoisson and Generalized Poisson regression\nNegative Binomial regression\nZero-Inflated Count models\n\n\nRLM: Robust linear models with support for several M-estimators.\nTime Series Analysis: models for time series analysis\n\nComplete StateSpace modeling framework\n\nSeasonal ARIMA and ARIMAX models\nVARMA and VARMAX models\nDynamic Factor models\nUnobserved Component models\n\n\nMarkov switching models (MSAR), also known as Hidden Markov Models (HMM)\nUnivariate time series analysis: AR, ARIMA\nVector autoregressive models, VAR and structural VAR\nVector error correction model, VECM\nexponential smoothing, Holt-Winters\nHypothesis tests for time series: unit root, cointegration and others\nDescriptive statistics and process models for time series analysis\n\n\nSurvival analysis:\n\nProportional hazards regression (Cox models)\nSurvivor function estimation (Kaplan-Meier)\nCumulative incidence function estimation\n\n\nMultivariate:\n\nPrincipal Component Analysis with missing data\nFactor Analysis with rotation\nMANOVA\nCanonical Correlation\n\n\nNonparametric statistics: Univariate and multivariate kernel density estimators\nDatasets: Datasets used for examples and in testing\nStatistics: a wide range of statistical tests\n\ndiagnostics and specification tests\ngoodness-of-fit and normality tests\nfunctions for multiple testing\nvarious additional statistical tests\n\n\nImputation with MICE, regression on order statistic and Gaussian imputation\nMediation analysis\nGraphics includes plot functions for visual analysis of data and model results\nI/O\n\nTools for reading Stata .dta files, but pandas has a more recent version\nTable output to ascii, latex, and html\n\n\nMiscellaneous models\nSandbox: statsmodels contains a sandbox folder with code in various stages of\ndevelopment and testing which is not considered \u201cproduction ready\u201d.  This covers\namong others\n\nGeneralized method of moments (GMM) estimators\nKernel regression\nVarious extensions to scipy.stats.distributions\nPanel data models\nInformation theoretic measures\n\n\n\n\n\nHow to get it\nThe main branch on GitHub is the most up to date code\nhttps://www.github.com/statsmodels/statsmodels\nSource download of release tags are available on GitHub\nhttps://github.com/statsmodels/statsmodels/tags\nBinaries and source distributions are available from PyPi\nhttps://pypi.org/project/statsmodels/\nBinaries can be installed in Anaconda\nconda install statsmodels\n\n\nInstalling from sources\nSee INSTALL.txt for requirements or see the documentation\nhttps://statsmodels.github.io/dev/install.html\n\n\nContributing\nContributions in any form are welcome, including:\n\nDocumentation improvements\nAdditional tests\nNew features to existing models\nNew models\n\nhttps://www.statsmodels.org/stable/dev/test_notes\nfor instructions on installing statsmodels in editable mode.\n\n\nLicense\nModified BSD (3-clause)\n\n\nDiscussion and Development\nDiscussions take place on the mailing list\nhttps://groups.google.com/group/pystatsmodels\nand in the issue tracker. We are very interested in feedback\nabout usability and suggestions for improvements.\n\n\nBug Reports\nBug reports can be submitted to the issue tracker at\nhttps://github.com/statsmodels/statsmodels/issues\n\n", "description": "Statistical modeling and econometrics in Python.", "category": "Data analysis/science"}, {"name": "starlette", "readme": "\n\n\n\n\n\u2728 The little ASGI framework that shines. 
\u2728\n\n\n\n\n\n\n\n\n\n\nDocumentation: https://www.starlette.io/\n\nStarlette\nStarlette is a lightweight ASGI framework/toolkit,\nwhich is ideal for building async web services in Python.\nIt is production-ready, and gives you the following:\n\nA lightweight, low-complexity HTTP web framework.\nWebSocket support.\nIn-process background tasks.\nStartup and shutdown events.\nTest client built on httpx.\nCORS, GZip, Static Files, Streaming responses.\nSession and Cookie support.\n100% test coverage.\n100% type annotated codebase.\nFew hard dependencies.\nCompatible with asyncio and trio backends.\nGreat overall performance against independent benchmarks.\n\nRequirements\nPython 3.8+\nInstallation\n$ pip3 install starlette\n\nYou'll also want to install an ASGI server, such as uvicorn, daphne, or hypercorn.\n$ pip3 install uvicorn\n\nExample\nexample.py:\nfrom starlette.applications import Starlette\nfrom starlette.responses import JSONResponse\nfrom starlette.routing import Route\n\n\nasync def homepage(request):\n    return JSONResponse({'hello': 'world'})\n\nroutes = [\n    Route(\"/\", endpoint=homepage)\n]\n\napp = Starlette(debug=True, routes=routes)\n\nThen run the application using Uvicorn:\n$ uvicorn example:app\n\nFor a more complete example, see encode/starlette-example.\nDependencies\nStarlette only requires anyio, and the following are optional:\n\nhttpx - Required if you want to use the TestClient.\njinja2 - Required if you want to use Jinja2Templates.\npython-multipart - Required if you want to support form parsing, with request.form().\nitsdangerous - Required for SessionMiddleware support.\npyyaml - Required for SchemaGenerator support.\n\nYou can install all of these with pip3 install starlette[full].\nFramework or Toolkit\nStarlette is designed to be used either as a complete framework, or as\nan ASGI toolkit. You can use any of its components independently.\nfrom starlette.responses import PlainTextResponse\n\n\nasync def app(scope, receive, send):\n    assert scope['type'] == 'http'\n    response = PlainTextResponse('Hello, world!')\n    await response(scope, receive, send)\n\nRun the app application in example.py:\n$ uvicorn example:app\nINFO: Started server process [11509]\nINFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\n\nRun uvicorn with --reload to enable auto-reloading on code changes.\nModularity\nThe modularity that Starlette is designed on promotes building re-usable\ncomponents that can be shared between any ASGI framework. This should enable\nan ecosystem of shared middleware and mountable applications.\nThe clean API separation also means it's easier to understand each component\nin isolation.\n\nStarlette is BSD licensed code.Designed & crafted with care.\u2014 \u2b50\ufe0f \u2014\n", "description": "ASGI framework/toolkit for building async web services"}, {"name": "stack-data", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nstack_data\nBasic usage\nVariables\nRendering lines with ranges and markers\nSyntax highlighting with Pygments\nGetting the full stack\n\n\n\n\n\nREADME.md\n\n\n\n\nstack_data\n  \nThis is a library that extracts data from stack frames and tracebacks, particularly to display more useful tracebacks than the default. 
It powers the tracebacks in IPython and futurecoder:\n\nYou can install it from PyPI:\npip install stack_data\n\nBasic usage\nHere's some code we'd like to inspect:\ndef foo():\n    result = []\n    for i in range(5):\n        row = []\n        result.append(row)\n        print_stack()\n        for j in range(5):\n            row.append(i * j)\n    return result\nNote that foo calls a function print_stack(). In reality we can imagine that an exception was raised at this line, or a debugger stopped there, but this is easy to play with directly. Here's a basic implementation:\nimport inspect\nimport stack_data\n\n\ndef print_stack():\n    frame = inspect.currentframe().f_back\n    frame_info = stack_data.FrameInfo(frame)\n    print(f\"{frame_info.code.co_name} at line {frame_info.lineno}\")\n    print(\"-----------\")\n    for line in frame_info.lines:\n        print(f\"{'-->' if line.is_current else '   '} {line.lineno:4} | {line.render()}\")\n(Beware that this has a major bug - it doesn't account for line gaps, which we'll learn about later)\nThe output of one call to print_stack() looks like:\nfoo at line 9\n-----------\n       6 | for i in range(5):\n       7 |     row = []\n       8 |     result.append(row)\n-->    9 |     print_stack()\n      10 |     for j in range(5):\n\nThe code for print_stack() is fairly self-explanatory. If you want to learn more details about a particular class or method I suggest looking through some docstrings. FrameInfo is a class that accepts either a frame or a traceback object and provides a bunch of nice attributes and properties (which are cached so you don't need to worry about performance). In particular frame_info.lines is a list of Line objects. line.render() returns the source code of that line suitable for display. Without any arguments it simply strips any common leading indentation. Later on we'll see a more powerful use for it.\nYou can see that frame_info.lines includes some lines of surrounding context. By default it includes 3 pieces of context before the main line and 1 piece after. We can configure the amount of context by passing options:\noptions = stack_data.Options(before=1, after=0)\nframe_info = stack_data.FrameInfo(frame, options)\nThen the output looks like:\nfoo at line 9\n-----------\n       8 | result.append(row)\n-->    9 | print_stack()\n\nNote that these parameters are not the number of lines before and after to include, but the number of pieces. A piece is a range of one or more lines in a file that should logically be grouped together. A piece contains either a single simple statement or a part of a compound statement (loops, if, try/except, etc) that doesn't contain any other statements. Most pieces are a single line, but a multi-line statement or if condition is a single piece. In the example above, all pieces are one line, because nothing is spread across multiple lines. 
If we change our code to include some multiline bits:\ndef foo():\n    result = []\n    for i in range(5):\n        row = []\n        result.append(\n            row\n        )\n        print_stack()\n        for j in range(\n                5\n        ):\n            row.append(i * j)\n    return result\nand then run the original code with the default options, then the output is:\nfoo at line 11\n-----------\n       6 | for i in range(5):\n       7 |     row = []\n       8 |     result.append(\n       9 |         row\n      10 |     )\n-->   11 |     print_stack()\n      12 |     for j in range(\n      13 |             5\n      14 |     ):\n\nNow lines 8-10 and lines 12-14 are each a single piece. Note that the output is essentially the same as the original in terms of the amount of code. The division of files into pieces means that the edge of the context is intuitive and doesn't crop out parts of statements or expressions. For example, if context was measured in lines instead of pieces, the last line of the above would be for j in range( which is much less useful.\nHowever, if a piece is very long, including all of it could be cumbersome. For this, Options has a parameter max_lines_per_piece, which is 6 by default. Suppose we have a piece in our code that's longer than that:\n        row = [\n            1,\n            2,\n            3,\n            4,\n            5,\n        ]\nframe_info.lines will truncate this piece so that instead of 7 Line objects it will produce 5 Line objects and one LINE_GAP in the middle, making 6 objects in total for the piece. Our code doesn't currently handle gaps, so it will raise an exception. We can modify it like so:\n    for line in frame_info.lines:\n        if line is stack_data.LINE_GAP:\n            print(\"       (...)\")\n        else:\n            print(f\"{'-->' if line.is_current else '   '} {line.lineno:4} | {line.render()}\")\nNow the output looks like:\nfoo at line 15\n-----------\n       6 | for i in range(5):\n       7 |     row = [\n       8 |         1,\n       9 |         2,\n       (...)\n      12 |         5,\n      13 |     ]\n      14 |     result.append(row)\n-->   15 |     print_stack()\n      16 |     for j in range(5):\n\nAlternatively, you can flip the condition around and check if isinstance(line, stack_data.Line):. Either way, you should always check for line gaps, or your code may appear to work at first but fail when it encounters a long piece.\nNote that the executing piece, i.e. the piece containing the current line being executed (line 15 in this case) is never truncated, no matter how long it is.\nThe lines of context never stray outside frame_info.scope, which is the innermost function or class definition containing the current line. For example, this is the output for a short function which has neither 3 lines before nor 1 line after the current line:\nbar at line 6\n-----------\n       4 | def bar():\n       5 |     foo()\n-->    6 |     print_stack()\n\nSometimes it's nice to ensure that the function signature is always showing. This can be done with Options(include_signature=True). The result looks like this:\nfoo at line 14\n-----------\n       9 | def foo():\n       (...)\n      11 |     for i in range(5):\n      12 |         row = []\n      13 |         result.append(row)\n-->   14 |         print_stack()\n      15 |         for j in range(5):\n\nTo avoid wasting space, pieces never start or end with a blank line, and blank lines between pieces are excluded. 
So if our code looks like this:\n    for i in range(5):\n        row = []\n\n        result.append(row)\n        print_stack()\n\n        for j in range(5):\nThe output doesn't change much, except you can see jumps in the line numbers:\n      11 |     for i in range(5):\n      12 |         row = []\n      14 |         result.append(row)\n-->   15 |         print_stack()\n      17 |         for j in range(5):\n\nVariables\nYou can also inspect variables and other expressions in a frame, e.g:\n    for var in frame_info.variables:\n        print(f\"{var.name} = {repr(var.value)}\")\nwhich may output:\nresult = [[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8], [0, 3, 6, 9, 12], []]\ni = 4\nrow = []\nj = 4\nframe_info.variables returns a list of Variable objects, which have attributes name, value, and nodes, which is a list of all AST representing that expression.\nA Variable may refer to an expression other than a simple variable name. It can be any expression evaluated by the library pure_eval which it deems 'interesting' (see those docs for more info). This includes expressions like foo.bar or foo[bar]. In these cases name is the source code of that expression. pure_eval ensures that it only evaluates expressions that won't have any side effects, e.g. where foo.bar is a normal attribute rather than a descriptor such as a property.\nframe_info.variables is a list of all the interesting expressions found in frame_info.scope, e.g. the current function, which may include expressions not visible in frame_info.lines. You can restrict the list by using frame_info.variables_in_lines or even frame_info.variables_in_executing_piece. For more control you can use frame_info.variables_by_lineno. See the docstrings for more information.\nRendering lines with ranges and markers\nSometimes you may want to insert special characters into the text for display purposes, e.g. HTML or ANSI color codes. stack_data provides a few tools to make this easier.\nLet's say we have a Line object where line.text (the original raw source code of that line) is \"foo = bar\", so line.text[6:9] is \"bar\", and we want to emphasise that part by inserting HTML at positions 6 and 9 in the text. Here's how we can do that directly:\nmarkers = [\n    stack_data.MarkerInLine(position=6, is_start=True, string=\"<b>\"),\n    stack_data.MarkerInLine(position=9, is_start=False, string=\"</b>\"),\n]\nline.render(markers)  # returns \"foo = <b>bar</b>\"\nHere is_start=True indicates that the marker is the first of a pair. This helps line.render() sort and insert the markers correctly so you don't end up with malformed HTML like foo<b>.<i></b>bar</i> where tags overlap.\nSince we're inserting HTML, we should actually use line.render(markers, escape_html=True) which will escape special HTML characters in the Python source (but not the markers) so for example foo = bar < spam would be rendered as foo = <b>bar</b> &lt; spam.\nUsually though you wouldn't create markers directly yourself. Instead you would start with one or more ranges and then convert them, like so:\nranges = [\n    stack_data.RangeInLine(start=0, end=3, data=\"foo\"),\n    stack_data.RangeInLine(start=6, end=9, data=\"bar\"),\n]\n\ndef convert_ranges(r):\n    if r.data == \"bar\":\n        return \"<b>\", \"</b>\"        \n\n# This results in `markers` being the same as in the above example.\nmarkers = stack_data.markers_from_ranges(ranges, convert_ranges)\nRangeInLine has a data attribute which can be any object. 
markers_from_ranges accepts a converter function to which it passes all the RangeInLine objects. If the converter function returns a pair of strings, it creates two markers from them. Otherwise it should return None to indicate that the range should be ignored, as with the first range containing \"foo\" in this example.\nThe reason this is useful is because there are built in tools to create these ranges for you. For example, if we change our print_stack() function to contain this:\ndef convert_variable_ranges(r):\n    variable, _node = r.data\n    return f'<span data-value=\"{repr(variable.value)}\">', '</span>'\n\nmarkers = stack_data.markers_from_ranges(line.variable_ranges, convert_variable_ranges)\nprint(f\"{'-->' if line.is_current else '   '} {line.lineno:4} | {line.render(markers, escape_html=True)}\")\nThen the output becomes:\nfoo at line 15\n-----------\n       9 | def foo():\n       (...)\n      11 |     for <span data-value=\"4\">i</span> in range(5):\n      12 |         <span data-value=\"[]\">row</span> = []\n      14 |         <span data-value=\"[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8], [0, 3, 6, 9, 12], []]\">result</span>.append(<span data-value=\"[]\">row</span>)\n-->   15 |         print_stack()\n      17 |         for <span data-value=\"4\">j</span> in range(5):\n\nline.variable_ranges is a list of RangeInLines for each Variable that appears at least partially in this line. The data attribute of the range is a pair (variable, node) where node is the particular AST node from the list variable.nodes that corresponds to this range.\nYou can also use line.token_ranges (e.g. if you want to do your own syntax highlighting) or line.executing_node_ranges if you want to highlight the currently executing node identified by the executing library. Or if you want to make your own range from an AST node, use line.range_from_node(node, data). See the docstrings for more info.\nSyntax highlighting with Pygments\nIf you'd like pretty colored text without the work, you can let Pygments do it for you. Just follow these steps:\n\npip install pygments separately as it's not a dependency of stack_data.\nCreate a pygments formatter object such as HtmlFormatter or Terminal256Formatter.\nPass the formatter to Options in the argument pygments_formatter.\nUse line.render(pygmented=True) to get your formatted text. In this case you can't pass any markers to render.\n\nIf you want, you can also highlight the executing node in the frame in combination with the pygments syntax highlighting. For this you will need:\n\nA pygments style - either a style class or a string that names it. See the documentation on styles and the styles gallery.\nA modification to make to the style for the executing node, which is a string such as \"bold\" or \"bg:#ffff00\" (yellow background). See the documentation on style rules.\nPass these two things to stack_data.style_with_executing_node(style, modifier) to get a new style class.\nPass the new style to your formatter when you create it.\n\nNote that this doesn't work with TerminalFormatter which just uses the basic ANSI colors and doesn't use the style passed to it in general.\nGetting the full stack\nCurrently print_stack() doesn't actually print the stack, it just prints one frame. 
Instead of frame_info = FrameInfo(frame, options), let's do this:\nfor frame_info in FrameInfo.stack_data(frame, options):\nNow the output looks something like this:\n<module> at line 18\n-----------\n      14 |         for j in range(5):\n      15 |             row.append(i * j)\n      16 |     return result\n-->   18 | bar()\n\nbar at line 5\n-----------\n       4 | def bar():\n-->    5 |     foo()\n\nfoo at line 13\n-----------\n      10 | for i in range(5):\n      11 |     row = []\n      12 |     result.append(row)\n-->   13 |     print_stack()\n      14 |     for j in range(5):\n\nHowever, just as frame_info.lines doesn't always yield Line objects, FrameInfo.stack_data doesn't always yield FrameInfo objects, and we must modify our code to handle that. Let's look at some different sample code:\ndef factorial(x):\n    return x * factorial(x - 1)\n\n\ntry:\n    print(factorial(5))\nexcept:\n    print_stack()\nIn this code we've forgotten to include a base case in our factorial function so it will fail with a RecursionError and there'll be many frames with similar information. Similar to the built in Python traceback, stack_data avoids showing all of these frames. Instead you will get a RepeatedFrames object which summarises the information. See its docstring for more details.\nHere is our updated implementation:\ndef print_stack():\n    for frame_info in FrameInfo.stack_data(sys.exc_info()[2]):\n        if isinstance(frame_info, FrameInfo):\n            print(f\"{frame_info.code.co_name} at line {frame_info.lineno}\")\n            print(\"-----------\")\n            for line in frame_info.lines:\n                print(f\"{'-->' if line.is_current else '   '} {line.lineno:4} | {line.render()}\")\n\n            for var in frame_info.variables:\n                print(f\"{var.name} = {repr(var.value)}\")\n\n            print()\n        else:\n            print(f\"... {frame_info.description} ...\\n\")\nAnd the output:\n<module> at line 9\n-----------\n       4 | def factorial(x):\n       5 |     return x * factorial(x - 1)\n       8 | try:\n-->    9 |     print(factorial(5))\n      10 | except:\n\nfactorial at line 5\n-----------\n       4 | def factorial(x):\n-->    5 |     return x * factorial(x - 1)\nx = 5\n\nfactorial at line 5\n-----------\n       4 | def factorial(x):\n-->    5 |     return x * factorial(x - 1)\nx = 4\n\n... 
factorial at line 5 (996 times) ...\n\nfactorial at line 5\n-----------\n       4 | def factorial(x):\n-->    5 |     return x * factorial(x - 1)\nx = -993\n\nIn addition to handling repeated frames, we've passed a traceback object to FrameInfo.stack_data instead of a frame.\nIf you want, you can pass collapse_repeated_frames=False to FrameInfo.stack_data (not to Options) and it will just yield FrameInfo objects for the full stack.\n\n\n"}, {"name": "srsly", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsrsly: Modern high-performance serialization utilities for Python\nMotivation\nInstallation\nAPI\nJSON\nfunctionsrsly.json_dumps\nfunctionsrsly.json_loads\nfunctionsrsly.write_json\nfunctionsrsly.read_json\nfunctionsrsly.write_gzip_json\nfunctionsrsly.write_gzip_jsonl\nfunctionsrsly.read_gzip_json\nfunctionsrsly.read_gzip_jsonl\nfunctionsrsly.write_jsonl\nfunctionsrsly.read_jsonl\nfunctionsrsly.is_json_serializable\nmsgpack\nfunctionsrsly.msgpack_dumps\nfunctionsrsly.msgpack_loads\nfunctionsrsly.write_msgpack\nfunctionsrsly.read_msgpack\npickle\nfunctionsrsly.pickle_dumps\nfunctionsrsly.pickle_loads\nYAML\nfunctionsrsly.yaml_dumps\nfunctionsrsly.yaml_loads\nfunctionsrsly.write_yaml\nfunctionsrsly.read_yaml\nfunctionsrsly.is_yaml_serializable\n\n\n\n\n\nREADME.md\n\n\n\n\n\nsrsly: Modern high-performance serialization utilities for Python\nThis package bundles some of the best Python serialization libraries into one\nstandalone package, with a high-level API that makes it easy to write code\nthat's correct across platforms and Pythons. This allows us to provide all the\nserialization utilities we need in a single binary wheel. Currently supports\nJSON, JSONL, MessagePack, Pickle and YAML.\n\n\n\n\n\nMotivation\nSerialization is hard, especially across Python versions and multiple platforms.\nAfter dealing with many subtle bugs over the years (encodings, locales, large\nfiles) our libraries like spaCy and\nProdigy had steadily grown a number of utility functions to\nwrap the multiple serialization formats we need to support (especially json,\nmsgpack and pickle). These wrapping functions ended up duplicated across our\ncodebases, so we wanted to put them in one place.\nAt the same time, we noticed that having a lot of small dependencies was making\nmaintenance harder, and making installation slower. To solve this, we've made\nsrsly standalone, by including the component packages directly within it. This\nway we can provide all the serialization utilities we need in a single binary\nwheel.\nsrsly currently includes forks of the following packages:\n\nujson\nmsgpack\nmsgpack-numpy\ncloudpickle\nruamel.yaml (without unsafe\nimplementations!)\n\nInstallation\n\n\u26a0\ufe0f Note that v2.x is only compatible with Python 3.6+. For 2.7+\ncompatibility, use v1.x.\n\nsrsly can be installed from pip. Before installing, make sure that your pip,\nsetuptools and wheel are up to date.\npython -m pip install -U pip setuptools wheel\npython -m pip install srsly\nOr from conda via conda-forge:\nconda install -c conda-forge srsly\nAlternatively, you can also compile the library from source. 
You'll need to make\nsure that you have a development environment with a Python distribution\nincluding header files, a compiler (XCode command-line tools on macOS / OS X or\nVisual C++ build tools on Windows), pip and git installed.\nInstall from source:\n# clone the repo\ngit clone https://github.com/explosion/srsly\ncd srsly\n\n# create a virtual environment\npython -m venv .env\nsource .env/bin/activate\n\n# update pip\npython -m pip install -U pip setuptools wheel\n\n# compile and install from source\npython -m pip install .\nFor developers, install requirements separately and then install in editable\nmode without build isolation:\n# install in editable mode\npython -m pip install -r requirements.txt\npython -m pip install --no-build-isolation --editable .\n\n# run test suite\npython -m pytest --pyargs srsly\nAPI\nJSON\n\n\ud83d\udce6 The underlying module is exposed via srsly.ujson. However, we normally\ninteract with it via the utility functions only.\n\nfunction srsly.json_dumps\nSerialize an object to a JSON string. Falls back to json if sort_keys=True\nis used (until it's fixed in ujson).\ndata = {\"foo\": \"bar\", \"baz\": 123}\njson_string = srsly.json_dumps(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\n-\nThe JSON-serializable data to output.\n\n\nindent\nint\nNumber of spaces used to indent JSON. Defaults to 0.\n\n\nsort_keys\nbool\nSort dictionary keys. Defaults to False.\n\n\nRETURNS\nstr\nThe serialized string.\n\n\n\nfunction srsly.json_loads\nDeserialize unicode or bytes to a Python object.\ndata = '{\"foo\": \"bar\", \"baz\": 123}'\nobj = srsly.json_loads(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\nstr / bytes\nThe data to deserialize.\n\n\nRETURNS\n-\nThe deserialized Python object.\n\n\n\nfunction srsly.write_json\nCreate a JSON file and dump contents or write to standard output.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nsrsly.write_json(\"/path/to/file.json\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to write to stdout.\n\n\ndata\n-\nThe JSON-serializable data to output.\n\n\nindent\nint\nNumber of spaces used to indent JSON. Defaults to 2.\n\n\n\nfunction srsly.read_json\nLoad JSON from a file or standard input.\ndata = srsly.read_json(\"/path/to/file.json\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to read from stdin.\n\n\nRETURNS\ndict / list\nThe loaded JSON content.\n\n\n\nfunction srsly.write_gzip_json\nCreate a gzipped JSON file and dump contents.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nsrsly.write_gzip_json(\"/path/to/file.json.gz\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\ndata\n-\nThe JSON-serializable data to output.\n\n\nindent\nint\nNumber of spaces used to indent JSON. Defaults to 2.\n\n\n\nfunction srsly.write_gzip_jsonl\nCreate a gzipped JSONL file and dump contents.\ndata = [{\"foo\": \"bar\"}, {\"baz\": 123}]\nsrsly.write_gzip_json(\"/path/to/file.jsonl.gz\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\nlines\n-\nThe JSON-serializable contents of each line.\n\n\nappend\nbool\nWhether or not to append to the location. 
Appending to .gz files is generally not recommended, as it doesn't allow the algorithm to take advantage of all data when compressing - files may hence be poorly compressed.\n\n\nappend_new_line\nbool\nWhether or not to write a new line before appending to the file.\n\n\n\nfunction srsly.read_gzip_json\nLoad gzipped JSON from a file.\ndata = srsly.read_gzip_json(\"/path/to/file.json.gz\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\nRETURNS\ndict / list\nThe loaded JSON content.\n\n\n\nfunction srsly.read_gzip_jsonl\nLoad gzipped JSONL from a file.\ndata = srsly.read_gzip_jsonl(\"/path/to/file.jsonl.gz\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\nRETURNS\ndict / list\nThe loaded JSONL content.\n\n\n\nfunction srsly.write_jsonl\nCreate a JSONL file (newline-delimited JSON) and dump contents line by line, or\nwrite to standard output.\ndata = [{\"foo\": \"bar\"}, {\"baz\": 123}]\nsrsly.write_jsonl(\"/path/to/file.jsonl\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to write to stdout.\n\n\nlines\niterable\nThe JSON-serializable lines.\n\n\nappend\nbool\nAppend to an existing file. Will open it in \"a\" mode and insert a newline before writing lines. Defaults to False.\n\n\nappend_new_line\nbool\nDefines whether a new line should first be written when appending to an existing file. Defaults to True.\n\n\n\nfunction srsly.read_jsonl\nRead a JSONL file (newline-delimited JSON) or from JSONL data from standard\ninput and yield contents line by line. Blank lines will always be skipped.\ndata = srsly.read_jsonl(\"/path/to/file.jsonl\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to read from stdin.\n\n\nskip\nbool\nSkip broken lines and don't raise ValueError. Defaults to False.\n\n\nYIELDS\n-\nThe loaded JSON contents of each line.\n\n\n\nfunction srsly.is_json_serializable\nCheck if a Python object is JSON-serializable.\nassert srsly.is_json_serializable({\"hello\": \"world\"}) is True\nassert srsly.is_json_serializable(lambda x: x) is False\n\n\n\nArgument\nType\nDescription\n\n\n\n\nobj\n-\nThe object to check.\n\n\nRETURNS\nbool\nWhether the object is JSON-serializable.\n\n\n\nmsgpack\n\n\ud83d\udce6 The underlying module is exposed via srsly.msgpack. However, we normally\ninteract with it via the utility functions only.\n\nfunction srsly.msgpack_dumps\nSerialize an object to a msgpack byte string.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nmsg = srsly.msgpack_dumps(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\n-\nThe data to serialize.\n\n\nRETURNS\nbytes\nThe serialized bytes.\n\n\n\nfunction srsly.msgpack_loads\nDeserialize msgpack bytes to a Python object.\nmsg = b\"\\x82\\xa3foo\\xa3bar\\xa3baz{\"\ndata = srsly.msgpack_loads(msg)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\nbytes\nThe data to deserialize.\n\n\nuse_list\nbool\nDon't use tuples instead of lists. Can make deserialization slower. 
Defaults to True.\n\n\nRETURNS\n-\nThe deserialized Python object.\n\n\n\nfunction srsly.write_msgpack\nCreate a msgpack file and dump contents.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nsrsly.write_msgpack(\"/path/to/file.msg\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\ndata\n-\nThe data to serialize.\n\n\n\nfunction srsly.read_msgpack\nLoad a msgpack file.\ndata = srsly.read_msgpack(\"/path/to/file.msg\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path.\n\n\nuse_list\nbool\nDon't use tuples instead of lists. Can make deserialization slower. Defaults to True.\n\n\nRETURNS\n-\nThe loaded and deserialized content.\n\n\n\npickle\n\n\ud83d\udce6 The underlying module is exposed via srsly.cloudpickle. However, we\nnormally interact with it via the utility functions only.\n\nfunction srsly.pickle_dumps\nSerialize a Python object with pickle.\ndata = {\"foo\": \"bar\", \"baz\": 123}\npickled_data = srsly.pickle_dumps(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\n-\nThe object to serialize.\n\n\nprotocol\nint\nProtocol to use. -1 for highest. Defaults to None.\n\n\nRETURNS\nbytes\nThe serialized object.\n\n\n\nfunction srsly.pickle_loads\nDeserialize bytes with pickle.\npickled_data = b\"\\x80\\x04\\x95\\x19\\x00\\x00\\x00\\x00\\x00\\x00\\x00}\\x94(\\x8c\\x03foo\\x94\\x8c\\x03bar\\x94\\x8c\\x03baz\\x94K{u.\"\ndata = srsly.pickle_loads(pickled_data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\nbytes\nThe data to deserialize.\n\n\nRETURNS\n-\nThe deserialized Python object.\n\n\n\nYAML\n\n\ud83d\udce6 The underlying module is exposed via srsly.ruamel_yaml. However, we\nnormally interact with it via the utility functions only.\n\nfunction srsly.yaml_dumps\nSerialize an object to a YAML string. See the\nruamel.yaml docs\nfor details on the indentation format.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nyaml_string = srsly.yaml_dumps(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\n-\nThe JSON-serializable data to output.\n\n\nindent_mapping\nint\nMapping indentation. Defaults to 2.\n\n\nindent_sequence\nint\nSequence indentation. Defaults to 4.\n\n\nindent_offset\nint\nIndentation offset. Defaults to 2.\n\n\nsort_keys\nbool\nSort dictionary keys. Defaults to False.\n\n\nRETURNS\nstr\nThe serialized string.\n\n\n\nfunction srsly.yaml_loads\nDeserialize unicode or a file object to a Python object.\ndata = 'foo: bar\\nbaz: 123'\nobj = srsly.yaml_loads(data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\ndata\nstr / file\nThe data to deserialize.\n\n\nRETURNS\n-\nThe deserialized Python object.\n\n\n\nfunction srsly.write_yaml\nCreate a YAML file and dump contents or write to standard output.\ndata = {\"foo\": \"bar\", \"baz\": 123}\nsrsly.write_yaml(\"/path/to/file.yml\", data)\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to write to stdout.\n\n\ndata\n-\nThe JSON-serializable data to output.\n\n\nindent_mapping\nint\nMapping indentation. Defaults to 2.\n\n\nindent_sequence\nint\nSequence indentation. Defaults to 4.\n\n\nindent_offset\nint\nIndentation offset. Defaults to 2.\n\n\nsort_keys\nbool\nSort dictionary keys. 
Defaults to False.\n\n\n\nfunction srsly.read_yaml\nLoad YAML from a file or standard input.\ndata = srsly.read_yaml(\"/path/to/file.yml\")\n\n\n\nArgument\nType\nDescription\n\n\n\n\npath\nstr / Path\nThe file path or \"-\" to read from stdin.\n\n\nRETURNS\ndict / list\nThe loaded YAML content.\n\n\n\nfunction srsly.is_yaml_serializable\nCheck if a Python object is YAML-serializable.\nassert srsly.is_yaml_serializable({\"hello\": \"world\"}) is True\nassert srsly.is_yaml_serializable(lambda x: x) is False\n\n\n\nArgument\nType\nDescription\n\n\n\n\nobj\n-\nThe object to check.\n\n\nRETURNS\nbool\nWhether the object is YAML-serializable.\n\n\n\n\n\n", "description": "Serialization utilities for JSON/JSONL/MessagePack/Pickle/YAML"}, {"name": "SpeechRecognition", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSpeechRecognition\nLibrary Reference\nExamples\nInstalling\nRequirements\nPython\nPyAudio (for microphone users)\nPocketSphinx-Python (for Sphinx users)\nVosk (for Vosk users)\nGoogle Cloud Speech Library for Python (for Google Cloud Speech API users)\nFLAC (for some systems)\nWhisper (for Whisper users)\nWhisper API (for Whisper API users)\nTroubleshooting\nThe recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.\nThe recognizer can't recognize speech right after it starts listening for the first time.\nThe recognizer doesn't understand my particular language/dialect.\nThe recognizer hangs on recognizer_instance.listen; specifically, when it's calling Microphone.MicrophoneStream.read.\nCalling Microphone() gives the error IOError: No Default Input Device Available.\nThe program doesn't run when compiled with PyInstaller.\nOn Ubuntu/Debian, I get annoying output in the terminal saying things like \"bt_audio_service_open: [...] Connection refused\" and various others.\nOn OS X, I get a ChildProcessError saying that it couldn't find the system FLAC converter, even though it's installed.\nDeveloping\nTesting\nFLAC Executables\nAuthors\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nSpeechRecognition\n\n\n\n\n\n\n\n\n\nLibrary for performing speech recognition, with support for several engines and APIs, online and offline.\nUPDATE 2022-02-09: Hey everyone! This project started as a tech demo, but these days it needs more time than I have to keep up with all the PRs and issues. Therefore, I'd like to put out an open invite for collaborators - just reach out at me@anthonyz.ca if you're interested!\nSpeech recognition engine/API support:\n\nCMU Sphinx (works offline)\nGoogle Speech Recognition\nGoogle Cloud Speech API\nWit.ai\nMicrosoft Azure Speech\nMicrosoft Bing Voice Recognition (Deprecated)\nHoundify API\nIBM Speech to Text\nSnowboy Hotword Detection (works offline)\nTensorflow\nVosk API (works offline)\nOpenAI whisper (works offline)\nWhisper API\n\nQuickstart: pip install SpeechRecognition. See the \"Installing\" section for more details.\nTo quickly try it out, run python -m speech_recognition after installing.\nProject links:\n\nPyPI\nSource code\nIssue tracker\n\n\nLibrary Reference\nThe library reference documents every publicly accessible object in the library. This document is also included under reference/library-reference.rst.\nSee Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.\nYou have to install Vosk models for using Vosk. Here are models avaiable. 
You have to place them in models folder of your project, like \"your-project-folder/models/your-vosk-model\"\n\nExamples\nSee the examples/ directory in the repository root for usage examples:\n\nRecognize speech input from the microphone\nTranscribe an audio file\nSave audio data to an audio file\nShow extended recognition results\nCalibrate the recognizer energy threshold for ambient noise levels (see recognizer_instance.energy_threshold for details)\nListening to a microphone in the background\nVarious other useful recognizer features\n\n\nInstalling\nFirst, make sure you have all the requirements listed in the \"Requirements\" section.\nThe easiest way to install this is using pip install SpeechRecognition.\nOtherwise, download the source distribution from PyPI, and extract the archive.\nIn the folder, run python setup.py install.\n\nRequirements\nTo use all of the functionality of the library, you should have:\n\nPython 3.8+ (required)\nPyAudio 0.2.11+ (required only if you need to use microphone input, Microphone)\nPocketSphinx (required only if you need to use the Sphinx recognizer, recognizer_instance.recognize_sphinx)\nGoogle API Client Library for Python (required only if you need to use the Google Cloud Speech API, recognizer_instance.recognize_google_cloud)\nFLAC encoder (required only if the system is not x86-based Windows/Linux/OS X)\nVosk (required only if you need to use Vosk API speech recognition recognizer_instance.recognize_vosk)\nWhisper (required only if you need to use Whisper recognizer_instance.recognize_whisper)\nopenai (required only if you need to use Whisper API speech recognition recognizer_instance.recognize_whisper_api)\n\nThe following requirements are optional, but can improve or extend functionality in some situations:\n\nIf using CMU Sphinx, you may want to install additional language packs to support languages like International French or Mandarin Chinese.\n\nThe following sections go over the details of each requirement.\n\nPython\nThe first software requirement is Python 3.8+. This is required to use the library.\n\nPyAudio (for microphone users)\nPyAudio is required if and only if you want to use microphone input (Microphone). PyAudio version 0.2.11+ is required, as earlier versions have known memory management bugs when recording from microphones in certain situations.\nIf not installed, everything in the library will still work, except attempting to instantiate a Microphone object will raise an AttributeError.\nThe installation instructions on the PyAudio website are quite good - for convenience, they are summarized below:\n\nOn Windows, install PyAudio using Pip: execute pip install pyaudio in a terminal.\n\nOn Debian-derived Linux distributions (like Ubuntu and Mint), install PyAudio using APT: execute sudo apt-get install python-pyaudio python3-pyaudio in a terminal.\n\nIf the version in the repositories is too old, install the latest release using Pip: execute sudo apt-get install portaudio19-dev python-all-dev python3-all-dev && sudo pip install pyaudio (replace pip with pip3 if using Python 3).\n\n\n\n\nOn OS X, install PortAudio using Homebrew: brew install portaudio. 
Then, install PyAudio using Pip: pip install pyaudio.\nOn other POSIX-based systems, install the portaudio19-dev and python-all-dev (or python3-all-dev if using Python 3) packages (or their closest equivalents) using a package manager of your choice, and then install PyAudio using Pip: pip install pyaudio (replace pip with pip3 if using Python 3).\n\nPyAudio wheel packages for common 64-bit Python versions on Windows and Linux are included for convenience, under the third-party/ directory in the repository root. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the repository root directory.\n\nPocketSphinx-Python (for Sphinx users)\nPocketSphinx-Python is required if and only if you want to use the Sphinx recognizer (recognizer_instance.recognize_sphinx).\nPocketSphinx-Python wheel packages for 64-bit Python 3.4 and 3.5 on Windows are included for convenience, under the third-party/ directory. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the SpeechRecognition folder.\nOn Linux and other POSIX systems (such as OS X), follow the instructions under \"Building PocketSphinx-Python from source\" in Notes on using PocketSphinx for installation instructions.\nNote that the versions available in most package repositories are outdated and will not work with the bundled language data. Using the bundled wheel packages or building from source is recommended.\nSee Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.\n\nVosk (for Vosk users)\nVosk API is required if and only if you want to use the Vosk recognizer (recognizer_instance.recognize_vosk).\nYou can install it with python3 -m pip install vosk.\nYou also have to install Vosk models:\nHere are the models available for download. You have to place them in the models folder of your project, e.g. \"your-project-folder/models/your-vosk-model\".\n\nGoogle Cloud Speech Library for Python (for Google Cloud Speech API users)\nGoogle Cloud Speech library for Python is required if and only if you want to use the Google Cloud Speech API (recognizer_instance.recognize_google_cloud).\nIf not installed, everything in the library will still work, except calling recognizer_instance.recognize_google_cloud will raise a RequestError.\nAccording to the official installation instructions, the recommended way to install this is using Pip: execute pip install google-cloud-speech (replace pip with pip3 if using Python 3).\n\nFLAC (for some systems)\nA FLAC encoder is required to encode the audio data to send to the API. If using Windows (x86 or x86-64), OS X (Intel Macs only, OS X 10.6 or higher), or Linux (x86 or x86-64), this is already bundled with this library - you do not need to install anything.\nOtherwise, ensure that you have the flac command line tool, which is often available through the system package manager. 
For example, this would usually be sudo apt-get install flac on Debian-derivatives, or brew install flac on OS X with Homebrew.\n\nWhisper (for Whisper users)\nWhisper is required if and only if you want to use whisper (recognizer_instance.recognize_whisper).\nYou can install it with python3 -m pip install git+https://github.com/openai/whisper.git soundfile.\n\nWhisper API (for Whisper API users)\nThe library openai is required if and only if you want to use Whisper API (recognizer_instance.recognize_whisper_api).\nIf not installed, everything in the library will still work, except calling recognizer_instance.recognize_whisper_api will raise an RequestError.\nYou can install it with python3 -m pip install openai.\n\nTroubleshooting\n\nThe recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.\nTry increasing the recognizer_instance.energy_threshold property. This is basically how sensitive the recognizer is to when recognition should start. Higher values mean that it will be less sensitive, which is useful if you are in a loud room.\nThis value depends entirely on your microphone or audio data. There is no one-size-fits-all value, but good values typically range from 50 to 4000.\nAlso, check on your microphone volume settings. If it is too sensitive, the microphone may be picking up a lot of ambient noise. If it is too insensitive, the microphone may be rejecting speech as just noise.\n\nThe recognizer can't recognize speech right after it starts listening for the first time.\nThe recognizer_instance.energy_threshold property is probably set to a value that is too high to start off with, and then being adjusted lower automatically by dynamic energy threshold adjustment. Before it is at a good level, the energy threshold is so high that speech is just considered ambient noise.\nThe solution is to decrease this threshold, or call recognizer_instance.adjust_for_ambient_noise beforehand, which will set the threshold to a good value automatically.\n\nThe recognizer doesn't understand my particular language/dialect.\nTry setting the recognition language to your language/dialect. To do this, see the documentation for recognizer_instance.recognize_sphinx, recognizer_instance.recognize_google, recognizer_instance.recognize_wit, recognizer_instance.recognize_bing, recognizer_instance.recognize_api, recognizer_instance.recognize_houndify, and recognizer_instance.recognize_ibm.\nFor example, if your language/dialect is British English, it is better to use \"en-GB\" as the language rather than \"en-US\".\n\nThe recognizer hangs on recognizer_instance.listen; specifically, when it's calling Microphone.MicrophoneStream.read.\nThis usually happens when you're using a Raspberry Pi board, which doesn't have audio input capabilities by itself. This causes the default microphone used by PyAudio to simply block when we try to read it. 
If you happen to be using a Raspberry Pi, you'll need a USB sound card (or USB microphone).\nOnce you do this, change all instances of Microphone() to Microphone(device_index=MICROPHONE_INDEX), where MICROPHONE_INDEX is the hardware-specific index of the microphone.\nTo figure out what the value of MICROPHONE_INDEX should be, run the following code:\nimport speech_recognition as sr\nfor index, name in enumerate(sr.Microphone.list_microphone_names()):\n    print(\"Microphone with name \\\"{1}\\\" found for `Microphone(device_index={0})`\".format(index, name))\nThis will print out something like the following:\nMicrophone with name \"HDA Intel HDMI: 0 (hw:0,3)\" found for `Microphone(device_index=0)`\nMicrophone with name \"HDA Intel HDMI: 1 (hw:0,7)\" found for `Microphone(device_index=1)`\nMicrophone with name \"HDA Intel HDMI: 2 (hw:0,8)\" found for `Microphone(device_index=2)`\nMicrophone with name \"Blue Snowball: USB Audio (hw:1,0)\" found for `Microphone(device_index=3)`\nMicrophone with name \"hdmi\" found for `Microphone(device_index=4)`\nMicrophone with name \"pulse\" found for `Microphone(device_index=5)`\nMicrophone with name \"default\" found for `Microphone(device_index=6)`\n\nNow, to use the Snowball microphone, you would change Microphone() to Microphone(device_index=3).\n\nCalling Microphone() gives the error IOError: No Default Input Device Available.\nAs the error says, the program doesn't know which microphone to use.\nTo proceed, either use Microphone(device_index=MICROPHONE_INDEX, ...) instead of Microphone(...), or set a default microphone in your OS. You can obtain possible values of MICROPHONE_INDEX using the code in the troubleshooting entry right above this one.\n\nThe program doesn't run when compiled with PyInstaller.\nAs of PyInstaller version 3.0, SpeechRecognition is supported out of the box. If you're getting weird issues when compiling your program using PyInstaller, simply update PyInstaller.\nYou can easily do this by running pip install --upgrade pyinstaller.\n\nOn Ubuntu/Debian, I get annoying output in the terminal saying things like \"bt_audio_service_open: [...] Connection refused\" and various others.\nThe \"bt_audio_service_open\" error means that you have a Bluetooth audio device, but as a physical device is not currently connected, we can't actually use it - if you're not using a Bluetooth microphone, then this can be safely ignored. If you are, and audio isn't working, then double check to make sure your microphone is actually connected. There does not seem to be a simple way to disable these messages.\nFor errors of the form \"ALSA lib [...] Unknown PCM\", see this StackOverflow answer. Basically, to get rid of an error of the form \"Unknown PCM cards.pcm.rear\", simply comment out pcm.rear cards.pcm.rear in /usr/share/alsa/alsa.conf, ~/.asoundrc, and /etc/asound.conf.\nFor \"jack server is not running or cannot be started\" or \"connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)\" or \"attempt to connect to server failed\", these are caused by ALSA trying to connect to JACK, and can be safely ignored. 
I'm not aware of any simple way to turn those messages off at this time, besides entirely disabling printing while starting the microphone.\n\nOn OS X, I get a ChildProcessError saying that it couldn't find the system FLAC converter, even though it's installed.\nInstalling FLAC for OS X directly from the source code will not work, since it doesn't correctly add the executables to the search path.\nInstalling FLAC using Homebrew ensures that the search path is correctly updated. First, ensure you have Homebrew, then run brew install flac to install the necessary files.\n\nDeveloping\nTo hack on this library, first make sure you have all the requirements listed in the \"Requirements\" section.\n\nMost of the library code lives in speech_recognition/__init__.py.\nExamples live under the examples/ directory, and the demo script lives in speech_recognition/__main__.py.\nThe FLAC encoder binaries are in the speech_recognition/ directory.\nDocumentation can be found in the reference/ directory.\nThird-party libraries, utilities, and reference material are in the third-party/ directory.\n\nTo install/reinstall the library locally, run python setup.py install in the project root directory.\nBefore a release, the version number is bumped in README.rst and speech_recognition/__init__.py. Version tags are then created using git config gpg.program gpg2 && git config user.signingkey DB45F6C431DE7C2DCD99FF7904882258A4063489 && git tag -s VERSION_GOES_HERE -m \"Version VERSION_GOES_HERE\".\nReleases are done by running make-release.sh VERSION_GOES_HERE to build the Python source packages, sign them, and upload them to PyPI.\n\nTesting\nTo run all the tests:\npython -m unittest discover --verbose\nTesting is also done automatically by TravisCI, upon every push. To set up the environment for offline/local Travis-like testing on a Debian-like system:\nsudo docker run --volume \"$(pwd):/speech_recognition\" --interactive --tty quay.io/travisci/travis-python:latest /bin/bash\nsu - travis && cd /speech_recognition\nsudo apt-get update && sudo apt-get install swig libpulse-dev\npip install --user pocketsphinx && pip install --user flake8 rstcheck && pip install --user -e .\npython -m unittest discover --verbose # run unit tests\npython -m flake8 --ignore=E501,E701 speech_recognition tests examples setup.py # ignore errors for long lines and multi-statement lines\npython -m rstcheck README.rst reference/*.rst # ensure RST is well-formed\n\nFLAC Executables\nThe included flac-win32 executable is the official FLAC 1.3.2 32-bit Windows binary.\nThe included flac-linux-x86 and flac-linux-x86_64 executables are built from the FLAC 1.3.2 source code with Manylinux to ensure that it's compatible with a wide variety of distributions.\nThe built FLAC executables should be bit-for-bit reproducible. 
To rebuild them, run the following inside the project directory on a Debian-like system:\n# download and extract the FLAC source code\ncd third-party\nsudo apt-get install --yes docker.io\n\n# build FLAC inside the Manylinux i686 Docker image\ntar xf flac-1.3.2.tar.xz\nsudo docker run --tty --interactive --rm --volume \"$(pwd):/root\" quay.io/pypa/manylinux1_i686:latest bash\n    cd /root/flac-1.3.2\n    ./configure LDFLAGS=-static # compiler flags to make a static build\n    make\nexit\ncp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86 && sudo rm -rf flac-1.3.2/\n\n# build FLAC inside the Manylinux x86_64 Docker image\ntar xf flac-1.3.2.tar.xz\nsudo docker run --tty --interactive --rm --volume \"$(pwd):/root\" quay.io/pypa/manylinux1_x86_64:latest bash\n    cd /root/flac-1.3.2\n    ./configure LDFLAGS=-static # compiler flags to make a static build\n    make\nexit\ncp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86_64 && sudo rm -r flac-1.3.2/\nThe included flac-mac executable is extracted from xACT 2.39, which is a frontend for FLAC 1.3.2 that conveniently includes binaries for all of its encoders. Specifically, it is a copy of xACT 2.39/xACT.app/Contents/Resources/flac in xACT2.39.zip.\n\nAuthors\nUberi <me@anthonyz.ca> (Anthony Zhang)\nbobsayshilol\narvindch <achembarpu@gmail.com> (Arvind Chembarpu)\nkevinismith <kevin_i_smith@yahoo.com> (Kevin Smith)\nhaas85\nDelightRun <changxu.mail@gmail.com>\nmaverickagm\nkamushadenes <kamushadenes@hyadesinc.com> (Kamus Hadenes)\nsbraden <braden.sarah@gmail.com> (Sarah Braden)\ntb0hdan (Bohdan Turkynewych)\nThynix <steve@asksteved.com> (Steve Dougherty)\nbeeedy <broderick.carlin@gmail.com> (Broderick Carlin)\n\nPlease report bugs and suggestions at the issue tracker!\nHow to cite this library (APA style):\n\nZhang, A. (2017). Speech Recognition (Version 3.8) [Software]. Available from https://github.com/Uberi/speech_recognition#readme.\nHow to cite this library (Chicago style):\n\nZhang, Anthony. 2017. Speech Recognition (version 3.8).\nAlso check out the Python Baidu Yuyin API, which is based on an older version of this project, and adds support for Baidu Yuyin. Note that Baidu Yuyin is only available inside China.\n\nLicense\nCopyright 2014-2017 Anthony Zhang (Uberi). The source code for this library is available online at GitHub.\nSpeechRecognition is made available under the 3-clause BSD license. See LICENSE.txt in the project's root directory for more information.\nFor convenience, all the official distributions of SpeechRecognition already include a copy of the necessary copyright notices and licenses. In your project, you can simply say that licensing information for SpeechRecognition can be found within the SpeechRecognition README, and make sure SpeechRecognition is visible to users if they wish to see it.\nSpeechRecognition distributes source code, binaries, and language files from CMU Sphinx. These files are BSD-licensed and redistributable as long as copyright notices are correctly retained. See speech_recognition/pocketsphinx-data/*/LICENSE*.txt and third-party/LICENSE-Sphinx.txt for license details for individual parts.\nSpeechRecognition distributes source code and binaries from PyAudio. These files are MIT-licensed and redistributable as long as copyright notices are correctly retained. See third-party/LICENSE-PyAudio.txt for license details.\nSpeechRecognition distributes binaries from FLAC - speech_recognition/flac-win32.exe, speech_recognition/flac-linux-x86, and speech_recognition/flac-mac. 
These files are GPLv2-licensed and redistributable, as long as the terms of the GPL are satisfied. The FLAC binaries are an aggregate of separate programs, so these GPL restrictions do not apply to the library or your programs that use the library, only to FLAC itself. See LICENSE-FLAC.txt for license details.\n\n\n", "description": "Library for speech recognition with support for APIs like Google Speech Recognition"}, {"name": "spacy", "readme": "\n\nspaCy: Industrial-strength NLP\nspaCy is a library for advanced Natural Language Processing in Python and\nCython. It's built on the very latest research, and was designed from day one to\nbe used in real products.\nspaCy comes with pretrained pipelines and currently\nsupports tokenization and training for 70+ languages. It features\nstate-of-the-art speed and neural network models for tagging, parsing,\nnamed entity recognition, text classification and more, multi-task\nlearning with pretrained transformers like BERT, as well as a\nproduction-ready training system and easy\nmodel packaging, deployment and workflow management. spaCy is commercial\nopen-source software, released under the\nMIT license.\n\ud83d\udcab Version 3.6 out now!\nCheck out the release notes here.\n\n\n\n\n\n\n\n\n\n\n\ud83d\udcd6 Documentation\n\n\n\nDocumentation\n\n\n\n\n\n\u2b50\ufe0f spaCy 101\nNew to spaCy? Here's everything you need to know!\n\n\n\ud83d\udcda Usage Guides\nHow to use spaCy and its features.\n\n\n\ud83d\ude80 New in v3.0\nNew features, backwards incompatibilities and migration guide.\n\n\n\ud83e\ude90 Project Templates\nEnd-to-end workflows you can clone, modify and run.\n\n\n\ud83c\udf9b API Reference\nThe detailed reference for spaCy's API.\n\n\n\ud83d\udce6 Models\nDownload trained pipelines for spaCy.\n\n\n\ud83c\udf0c Universe\nPlugins, extensions, demos and books from the spaCy ecosystem.\n\n\n\u2699\ufe0f spaCy VS Code Extension\nAdditional tooling and features for working with spaCy's config files.\n\n\n\ud83d\udc69\u200d\ud83c\udfeb Online Course\nLearn spaCy in this free and interactive online course.\n\n\n\ud83d\udcfa Videos\nOur YouTube channel with video tutorials, talks and more.\n\n\n\ud83d\udee0 Changelog\nChanges and version history.\n\n\n\ud83d\udc9d Contribute\nHow to contribute to the spaCy project and code base.\n\n\n\nGet a custom spaCy pipeline, tailor-made for your NLP problem by spaCy's core developers. Streamlined, production-ready, predictable and maintainable. Start by completing our 5-minute questionnaire to tell us what you need and we'll be in touch! Learn more \u2192\n\n\n\nBespoke advice for problem solving, strategy and analysis for applied NLP projects. Services include data strategy, code reviews, pipeline design and annotation coaching. Curious? Fill in our 5-minute questionnaire to tell us what you need and we'll be in touch! 
Learn more \u2192\n\n\n\n\ud83d\udcac Where to ask questions\nThe spaCy project is maintained by the spaCy team.\nPlease understand that we won't be able to provide individual support via email.\nWe also believe that help is much more valuable if it's shared publicly, so that\nmore people can benefit from it.\n\n\n\nType\nPlatforms\n\n\n\n\n\ud83d\udea8 Bug Reports\nGitHub Issue Tracker\n\n\n\ud83c\udf81 Feature Requests & Ideas\nGitHub Discussions\n\n\n\ud83d\udc69\u200d\ud83d\udcbb Usage Questions\nGitHub Discussions \u00b7 Stack Overflow\n\n\n\ud83d\uddef General Discussion\nGitHub Discussions\n\n\n\nFeatures\n\nSupport for 70+ languages\nTrained pipelines for different languages and tasks\nMulti-task learning with pretrained transformers like BERT\nSupport for pretrained word vectors and embeddings\nState-of-the-art speed\nProduction-ready training system\nLinguistically-motivated tokenization\nComponents for named entity recognition, part-of-speech-tagging,\ndependency parsing, sentence segmentation, text classification,\nlemmatization, morphological analysis, entity linking and more\nEasily extensible with custom components and attributes\nSupport for custom models in PyTorch, TensorFlow and other frameworks\nBuilt in visualizers for syntax and NER\nEasy model packaging, deployment and workflow management\nRobust, rigorously evaluated accuracy\n\n\ud83d\udcd6 For more details, see the\nfacts, figures and benchmarks.\n\u23f3 Install spaCy\nFor detailed installation instructions, see the\ndocumentation.\n\nOperating system: macOS / OS X \u00b7 Linux \u00b7 Windows (Cygwin, MinGW, Visual\nStudio)\nPython version: Python 3.6+ (only 64 bit)\nPackage managers: pip \u00b7 conda (via conda-forge)\n\npip\nUsing pip, spaCy releases are available as source packages and binary wheels.\nBefore you install spaCy and its dependencies, make sure that your pip,\nsetuptools and wheel are up to date.\npip install -U pip setuptools wheel\npip install spacy\n\nTo install additional data tables for lemmatization and normalization you can\nrun pip install spacy[lookups] or install\nspacy-lookups-data\nseparately. The lookups package is needed to create blank models with\nlemmatization data, and to lemmatize in languages that don't yet come with\npretrained models and aren't powered by third-party libraries.\nWhen using pip it is generally recommended to install packages in a virtual\nenvironment to avoid modifying system state:\npython -m venv .env\nsource .env/bin/activate\npip install -U pip setuptools wheel\npip install spacy\n\nconda\nYou can also install spaCy from conda via the conda-forge channel. For the\nfeedstock including the build recipe and configuration, check out\nthis repository.\nconda install -c conda-forge spacy\n\nUpdating spaCy\nSome updates to spaCy may require downloading new statistical models. If you're\nrunning spaCy v2.0 or higher, you can use the validate command to check if\nyour installed models are compatible and if not, print details on how to update\nthem:\npip install -U spacy\npython -m spacy validate\n\nIf you've trained your own models, keep in mind that your training and runtime\ninputs must match. After updating spaCy, we recommend retraining your models\nwith the new version.\n\ud83d\udcd6 For details on upgrading from spaCy 2.x to spaCy 3.x, see the\nmigration guide.\n\ud83d\udce6 Download model packages\nTrained pipelines for spaCy can be installed as Python packages. This means\nthat they're a component of your application, just like any other module. 
Models\ncan be installed using spaCy's download\ncommand, or manually by pointing pip to a path or URL.\n\n\n\nDocumentation\n\n\n\n\n\nAvailable Pipelines\nDetailed pipeline descriptions, accuracy figures and benchmarks.\n\n\nModels Documentation\nDetailed usage and installation instructions.\n\n\nTraining\nHow to train your own pipelines on your data.\n\n\n\n# Download best-matching version of specific model for your spaCy installation\npython -m spacy download en_core_web_sm\n\n# pip install .tar.gz archive or .whl from path or URL\npip install /Users/you/en_core_web_sm-3.0.0.tar.gz\npip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl\npip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz\n\nLoading and using models\nTo load a model, use spacy.load()\nwith the model name or a path to the model data directory.\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"This is a sentence.\")\n\nYou can also import a model directly via its full name and then call its\nload() method with no arguments.\nimport spacy\nimport en_core_web_sm\n\nnlp = en_core_web_sm.load()\ndoc = nlp(\"This is a sentence.\")\n\n\ud83d\udcd6 For more info and examples, check out the\nmodels documentation.\n\u2692 Compile from source\nThe other way to install spaCy is to clone its\nGitHub repository and build it from\nsource. That is the common way if you want to make changes to the code base.\nYou'll need to make sure that you have a development environment consisting of a\nPython distribution including header files, a compiler,\npip,\nvirtualenv and\ngit installed. The compiler part is the trickiest. How to\ndo that depends on your system.\n\n\n\nPlatform\n\n\n\n\n\nUbuntu\nInstall system-level dependencies via apt-get: sudo apt-get install build-essential python-dev git .\n\n\nMac\nInstall a recent version of XCode, including the so-called \"Command Line Tools\". macOS and OS X ship with Python and git preinstalled.\n\n\nWindows\nInstall a version of the Visual C++ Build Tools or Visual Studio Express that matches the version that was used to compile your Python interpreter.\n\n\n\nFor more details and instructions, see the documentation on\ncompiling spaCy from source and the\nquickstart widget to get the right\ncommands for your platform and Python version.\ngit clone https://github.com/explosion/spaCy\ncd spaCy\n\npython -m venv .env\nsource .env/bin/activate\n\n# make sure you are using the latest pip\npython -m pip install -U pip setuptools wheel\n\npip install -r requirements.txt\npip install --no-build-isolation --editable .\n\nTo install with extras:\npip install --no-build-isolation --editable .[lookups,cuda102]\n\n\ud83d\udea6 Run tests\nspaCy comes with an extensive test suite. In order to run the\ntests, you'll usually want to clone the repository and build spaCy from source.\nThis will also install the required development dependencies and test utilities\ndefined in the requirements.txt.\nAlternatively, you can run pytest on the tests from within the installed\nspacy package. 
Don't forget to also install the test utilities via spaCy's\nrequirements.txt:\npip install -r requirements.txt\npython -m pytest --pyargs spacy\n\n", "description": "Industrial-strength Natural Language Processing in Python.", "category": "Natural language processing"}, {"name": "spacy-legacy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "soupsieve", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nSoup Sieve\nOverview\nInstallation\nDocumentation\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\n\n\nSoup Sieve\nOverview\nSoup Sieve is a CSS selector library designed to be used with Beautiful Soup 4. It aims to provide selecting,\nmatching, and filtering using modern CSS selectors. Soup Sieve currently provides selectors from the CSS level 1\nspecifications up through the latest CSS level 4 drafts and beyond (though some are not yet implemented).\nSoup Sieve was written with the intent to replace Beautiful Soup's builtin select feature, and as of Beautiful Soup\nversion 4.7.0, it now is \ud83c\udf8a. Soup Sieve can also be imported in order to use its API directly for\nmore controlled, specialized parsing.\nSoup Sieve has implemented most of the CSS selectors up through the latest CSS draft specifications, though there are a\nnumber that don't make sense in a non-browser environment. Selectors that cannot provide meaningful functionality simply\ndo not match anything. 
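As a rough illustration of importing Soup Sieve and calling its API directly (a minimal sketch; the HTML snippet and the selectors used here are made up for the example):

import soupsieve as sv
from bs4 import BeautifulSoup

html = "<div><p class='intro'>Hello</p><p>World</p></div>"
soup = BeautifulSoup(html, "html.parser")

# select() returns a list of matching tags, much like BeautifulSoup's own select()
print(sv.select("p.intro", soup))

# match() tests whether a single tag matches a selector
print(sv.match("div > p", soup.p))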
Some of the supported selectors are:\n\n.classes\n#ids\n[attributes=value]\nparent child\nparent > child\nsibling ~ sibling\nsibling + sibling\n:not(element.class, element2.class)\n:is(element.class, element2.class)\nparent:has(> child)\nand many more\n\nInstallation\nYou must have Beautiful Soup already installed:\npip install beautifulsoup4\n\nIn most cases, assuming you've installed version 4.7.0, that should be all you need to do, but if you've installed via\nsome alternative method, and Soup Sieve is not automatically installed, you can install it directly:\npip install soupsieve\n\nIf you want to manually install it from source, first ensure that build is\ninstalled:\npip install build\n\nThen navigate to the root of the project and build the wheel and install (replacing <ver> with the current version):\npython -m build -w\npip install dist/soupsive-<ver>-py3-none-any.whl\n\nDocumentation\nDocumentation is found here: https://facelessuser.github.io/soupsieve/.\nLicense\nMIT\n\n\n", "description": "CSS selector library for BeautifulSoup"}, {"name": "SoundFile", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npython-soundfile\nBreaking Changes\nInstallation\nBuilding\nError Reporting\nRead/Write Functions\nBlock Processing\nSoundFile Objects\nRAW Files\nVirtual IO\nKnown Issues\nNews\n\n\n\n\n\nREADME.rst\n\n\n\n\npython-soundfile\n\n\n\n\n\n\n\n\nThe soundfile module is an audio\nlibrary based on libsndfile, CFFI and NumPy. Full documentation is\navailable on https://python-soundfile.readthedocs.io/.\nThe soundfile module can read and write sound files. File reading/writing is\nsupported through libsndfile,\nwhich is a free, cross-platform, open-source (LGPL) library for reading\nand writing many different sampled sound file formats that runs on many\nplatforms including Windows, OS X, and Unix. It is accessed through\nCFFI, which is a foreign function\ninterface for Python calling C code. CFFI is supported for CPython 2.6+,\n3.x and PyPy 2.0+. The soundfile module represents audio data as NumPy arrays.\n\npython-soundfile is BSD licensed (BSD 3-Clause License).\n(c) 2013, Bastian Bechtold\n\n\n\n\n\n\n\nBreaking Changes\nThe soundfile module has evolved rapidly in the past. Most\nnotably, we changed the import name from import pysoundfile to\nimport soundfile in 0.7. In 0.6, we cleaned up many small\ninconsistencies, particularly in the the ordering and naming of\nfunction arguments and the removal of the indexing interface.\nIn 0.8.0, we changed the default value of always_2d from True\nto False. Also, the order of arguments of the write function\nchanged from write(data, file, ...) to write(file, data, ...).\nIn 0.9.0, we changed the ctype arguments of the buffer_*\nmethods to dtype, using the Numpy dtype notation. The old\nctype arguments still work, but are now officially deprecated.\nIn 0.12.0, we changed the load order of the libsndfile library. Now,\nthe packaged libsndfile in the platform-specific wheels is tried\nbefore falling back to any system-provided libsndfile. If you would\nprefer using the system-provided libsndfile, install the source\npackage or source wheel instead of the platform-specific wheels.\n\nInstallation\nThe soundfile module depends on the Python packages CFFI and NumPy, and the\nlibrary libsndfile.\nIn a modern Python, you can use pip install soundfile to download\nand install the latest release of the soundfile module and its\ndependencies. 
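Once that pip install (and the bundled libsndfile described next) is in place, a quick sanity check is to print the version attributes; this is just a small sketch using the __libsndfile_version__ member mentioned in the 0.9.0 release notes further down:

import soundfile as sf

# module version and the libsndfile build that was actually loaded
print(sf.__version__)
print(sf.__libsndfile_version__)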
On Windows (64/32) and OS X (Intel/ARM) and Linux 64,\nthis will also install a current version of the library libsndfile. If\nyou install the source module, you need to install libsndfile using\nyour distribution's package manager, for example sudo apt install\nlibsndfile1.\nIf you are running on an unusual platform or if you are using an older\nversion of Python, you might need to install NumPy and CFFI separately,\nfor example using the Anaconda package manager or the Unofficial Windows\nBinaries for Python Extension Packages.\n\nBuilding\nSoundfile itself does not contain any compiled code and can be\nbundled into a wheel with the usual python setup.py bdist_wheel.\nHowever, soundfile relies on libsndfile, and optionally ships its\nown copy of libsndfile in the wheel.\nTo build a binary wheel that contains libsndfile, make sure to\ncheckout and update the _soundfile_data submodule, then run\npython setup.py bdist_wheel as usual. If the resulting file size\nof the wheel is around one megabyte, a matching libsndfile has been\nbundled (without libsndfile, it's around 25 KB).\nTo build binary wheels for all supported platforms, run python\nbuild_wheels.py, which will python setup.py bdist_wheel for each\nof the platforms we have precompiled libsndfiles for.\n\nError Reporting\nIn case of API usage errors the soundfile module raises the usual ValueError or TypeError.\nFor other errors SoundFileError is raised (used to be RuntimeError).\nParticularly, a LibsndfileError subclass of this exception is raised on\nerrors reported by the libsndfile library. In that case the exception object\nprovides the libsndfile internal error code in the LibsndfileError.code attribute and the raw\nlibsndfile error message in the LibsndfileError.error_string attribute.\n\nRead/Write Functions\nData can be written to the file using soundfile.write(), or read from\nthe file using soundfile.read(). The soundfile module can open all file formats\nthat libsndfile supports, for example WAV,\nFLAC, OGG and MAT files (see Known Issues below about writing OGG files).\nHere is an example for a program that reads a wave file and copies it\ninto an FLAC file:\nimport soundfile as sf\n\ndata, samplerate = sf.read('existing_file.wav')\nsf.write('new_file.flac', data, samplerate)\n\nBlock Processing\nSound files can also be read in short, optionally overlapping blocks\nwith soundfile.blocks().\nFor example, this calculates the signal level for each block of a long\nfile:\nimport numpy as np\nimport soundfile as sf\n\nrms = [np.sqrt(np.mean(block**2)) for block in\n       sf.blocks('myfile.wav', blocksize=1024, overlap=512)]\n\nSoundFile Objects\nSound files can also be opened as SoundFile objects. Every\nSoundFile has a specific sample rate, data format and a set number of\nchannels.\nIf a file is opened, it is kept open for as long as the SoundFile\nobject exists. The file closes when the object is garbage collected,\nbut you should use the SoundFile.close() method or the\ncontext manager to close the file explicitly:\nimport soundfile as sf\n\nwith sf.SoundFile('myfile.wav', 'r+') as f:\n    while f.tell() < f.frames:\n        pos = f.tell()\n        data = f.read(1024)\n        f.seek(pos)\n        f.write(data*2)\nAll data access uses frames as index. A frame is one discrete time-step\nin the sound file. Every frame contains as many samples as there are\nchannels in the file.\n\nRAW Files\nsoundfile.read() can usually auto-detect the file type of sound files. 
This\nis not possible for RAW files, though:\nimport soundfile as sf\n\ndata, samplerate = sf.read('myfile.raw', channels=1, samplerate=44100,\n                           subtype='FLOAT')\nNote that on x86, this defaults to endian='LITTLE'. If you are\nreading big endian data (mostly old PowerPC/6800-based files), you\nhave to set endian='BIG' accordingly.\nYou can write RAW files in a similar way, but be advised that in most\ncases, a more expressive format is better and should be used instead.\n\nVirtual IO\nIf you have an open file-like object, soundfile.read() can open it just like\nregular files:\nimport soundfile as sf\nwith open('filename.flac', 'rb') as f:\n    data, samplerate = sf.read(f)\nHere is an example using an HTTP request:\nimport io\nimport soundfile as sf\nfrom urllib.request import urlopen\n\nurl = \"http://tinyurl.com/shepard-risset\"\ndata, samplerate = sf.read(io.BytesIO(urlopen(url).read()))\nNote that the above example only works with Python 3.x.\nFor Python 2.x support, replace the third line with:\nfrom urllib2 import urlopen\n\nKnown Issues\nWriting to OGG files can result in empty files with certain versions of libsndfile. See #130 for news on this issue.\nIf using a Buildroot style system, Python has trouble locating libsndfile.so file, which causes python-soundfile to not be loaded. This is apparently a bug in python. For the time being, in soundfile.py, you can remove the call to _find_library and hardcode the location of the libsndfile.so in _ffi.dlopen. See #258 for discussion on this issue.\n\nNews\n\n2013-08-27 V0.1.0 Bastian Bechtold:\nInitial prototype. A simple wrapper for libsndfile in Python\n2013-08-30 V0.2.0 Bastian Bechtold:\nBugfixes and more consistency with PySoundCard\n2013-08-30 V0.2.1 Bastian Bechtold:\nBugfixes\n2013-09-27 V0.3.0 Bastian Bechtold:\nAdded binary installer for Windows, and context manager\n2013-11-06 V0.3.1 Bastian Bechtold:\nSwitched from distutils to setuptools for easier installation\n2013-11-29 V0.4.0 Bastian Bechtold:\nThanks to David Blewett, now with Virtual IO!\n2013-12-08 V0.4.1 Bastian Bechtold:\nThanks to Xidorn Quan, FLAC files are not float32 any more.\n2014-02-26 V0.5.0 Bastian Bechtold:\nThanks to Matthias Geier, improved seeking and a flush() method.\n2015-01-19 V0.6.0 Bastian Bechtold:\nA big, big thank you to Matthias Geier, who did most of the work!\n\nSwitched to float64 as default data type.\nFunction arguments changed for consistency.\nAdded unit tests.\nAdded global read(), write(), blocks() convenience\nfunctions.\nDocumentation overhaul and hosting on readthedocs.\nAdded 'x' open mode.\nAdded tell() method.\nAdded __repr__() method.\n\n\n2015-04-12 V0.7.0 Bastian Bechtold:\nAgain, thanks to Matthias Geier for all of his hard work, but also\nNils Werner and Whistler7 for their many suggestions and help.\n\nRenamed import pysoundfile to import soundfile.\nInstallation through pip wheels that contain the necessary\nlibraries for OS X and Windows.\nRemoved exclusive_creation argument to write().\nAdded truncate() method.\n\n\n2015-10-20 V0.8.0 Bastian Bechtold:\nAgain, Matthias Geier contributed a whole lot of hard work to this\nrelease.\n\nChanged the default value of always_2d from True to\nFalse.\nNumpy is now optional, and only loaded for read and\nwrite.\nAdded SoundFile.buffer_read() and\nSoundFile.buffer_read_into() and SoundFile.buffer_write(),\nwhich read/write raw data without involving Numpy.\nAdded info() function that returns metadata of a sound file.\nChanged the argument order of the 
write() function from\nwrite(data, file, ...) to write(file, data, ...)\n\nAnd many more minor bug fixes.\n\n2017-02-02 V0.9.0 Bastian Bechtold:\nThank you, Matthias Geier, Tomas Garcia, and Todd, for contributions\nfor this release.\n\nAdds support for ALAC files.\nAdds new member __libsndfile_version__\nAdds number of frames to info class\nAdds dtype argument to buffer_* methods\nDeprecates ctype argument to buffer_* methods\nAdds official support for Python 3.6\n\nAnd some minor bug fixes.\n\n2017-11-12 V0.10.0 Bastian Bechtold:\nThank you, Matthias Geier, Toni Barth, Jon Peirce, Till Hoffmann,\nand Tomas Garcia, for contributions to this release.\n\nShould now work with cx_freeze.\nSeveral documentation fixes in the README.\nRemoves deprecated ctype argument in favor of dtype in buffer_*().\nAdds SoundFile.frames in favor of now-deprecated __len__().\nImproves performance of blocks() and SoundFile.blocks().\nImproves import time by using CFFI's out of line mode.\nAdds a build script for building distributions.\n\n\n2022-06-02 V0.11.0 Bastian Bechtold:\nThank you, tennies, Hannes Helmholz, Christoph Boeddeker, Matt\nVollrath, Matthias Geier, Jacek Konieczny, Boris Verkhovskiy,\nJonas Haag, Eduardo Moguillansky, Panos Laganakos, Jarvy Jarvison,\nDomingo Ramirez, Tim Chagnon, Kyle Benesch, Fabian-Robert St\u00f6ter,\nJoe Todd\n\nMP3 support\nAdds binary wheels for macOS M1\nImproves compatibility with macOS, specifically for M1 machines\nFixes file descriptor open for binary wheels on Windows and Python 3.5+\nUpdates libsndfile to v1.1.0\nAdds get_strings method for retrieving all metadata at once\nImproves documentation, error messages and tests\nDisplays length of very short files in samples\nSupports the file system path protocol (pathlib et al)\n\n\n2023-02-02 V0.12.0 Bastian Bechtold\nThank you, Barabazs, Andrew Murray, Jon Peirce, for contributions\nto this release.\n\nUpdated libsndfile to v1.2.0\nImproves precompiled library location, especially with py2app or cx-freeze.\nNow provide binary wheels for Linux x86_64\nNow prefers packaged libsndfile over system-installed libsndfile\n\n\n2023-02-15 V0.12.1 Bastian Bechtold\nThank you, funnypig, for the bug report\n\nFixed typo on library location detection if no packaged lib and\nno system lib was found\n\n\n\n\n\n", "description": "Reads and writes sound files"}, {"name": "sortedcontainers", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nFeatures\nInstallation\nQuickstart\nQuestions?\nContributing\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nZipline is a Pythonic algorithmic trading library. It is an event-driven\nsystem for backtesting. Zipline is currently used in production as the backtesting and live-trading\nengine powering Quantopian -- a free,\ncommunity-centered, hosted platform for building and executing trading\nstrategies. Quantopian also offers a fully managed service for professionals\nthat includes Zipline, Alphalens, Pyfolio, FactSet data, and more.\n\nJoin our Community!\nDocumentation\nWant to Contribute? See our Development Guidelines\n\n\nFeatures\n\nEase of Use: Zipline tries to get out of your way so that you can\nfocus on algorithm development. 
See below for a code example.\n\"Batteries Included\": many common statistics like\nmoving average and linear regression can be readily accessed from\nwithin a user-written algorithm.\nPyData Integration: Input of historical data and output of performance statistics are\nbased on Pandas DataFrames to integrate nicely into the existing\nPyData ecosystem.\nStatistics and Machine Learning Libraries: You can use libraries like matplotlib, scipy,\nstatsmodels, and sklearn to support development, analysis, and\nvisualization of state-of-the-art trading systems.\n\n\nInstallation\nZipline currently supports Python 2.7, 3.5, and 3.6, and may be installed via\neither pip or conda.\nNote: Installing Zipline is slightly more involved than the average Python\npackage. See the full Zipline Install Documentation for detailed\ninstructions.\nFor a development installation (used to develop Zipline itself), create and\nactivate a virtualenv, then run the etc/dev-install script.\n\nQuickstart\nSee our getting started tutorial.\nThe following code implements a simple dual moving average algorithm.\nfrom zipline.api import order_target, record, symbol\n\ndef initialize(context):\n    context.i = 0\n    context.asset = symbol('AAPL')\n\n\ndef handle_data(context, data):\n    # Skip first 300 days to get full windows\n    context.i += 1\n    if context.i < 300:\n        return\n\n    # Compute averages\n    # data.history() has to be called with the same params\n    # from above and returns a pandas dataframe.\n    short_mavg = data.history(context.asset, 'price', bar_count=100, frequency=\"1d\").mean()\n    long_mavg = data.history(context.asset, 'price', bar_count=300, frequency=\"1d\").mean()\n\n    # Trading logic\n    if short_mavg > long_mavg:\n        # order_target orders as many shares as needed to\n        # achieve the desired number of shares.\n        order_target(context.asset, 100)\n    elif short_mavg < long_mavg:\n        order_target(context.asset, 0)\n\n    # Save values for later inspection\n    record(AAPL=data.current(context.asset, 'price'),\n           short_mavg=short_mavg,\n           long_mavg=long_mavg)\nYou can then run this algorithm using the Zipline CLI.\nFirst, you must download some sample pricing and asset data:\n$ zipline ingest\n$ zipline run -f dual_moving_average.py --start 2014-1-1 --end 2018-1-1 -o dma.pickle --no-benchmark\nThis will download asset pricing data data sourced from Quandl, and stream it through the algorithm over the specified time range.\nThen, the resulting performance DataFrame is saved in dma.pickle, which you can load and analyze from within Python.\nYou can find other examples in the zipline/examples directory.\n\nQuestions?\nIf you find a bug, feel free to open an issue and fill out the issue template.\n\nContributing\nAll contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. Details on how to set up a development environment can be found in our development guidelines.\nIf you are looking to start working with the Zipline codebase, navigate to the GitHub issues tab and start looking through interesting issues. Sometimes there are issues labeled as Beginner Friendly or Help Wanted.\nFeel free to ask questions on the mailing list or on Gitter.\n\nNote\nPlease note that Zipline is not a community-led project. 
Zipline is\nmaintained by the Quantopian engineering team, and we are quite small and\noften busy.\nBecause of this, we want to warn you that we may not attend to your pull\nrequest, issue, or direct mention in months, or even years. We hope you\nunderstand, and we hope that this note might help reduce any frustration or\nwasted time.\n\n\n\n", "description": "Fast, pure-Python implementation of sorted collections."}, {"name": "snuggs", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsnuggs\nSyntax\nExamples\nAddition of two numbers\nMultiplication of a number and an array\nEvaluation context\nFunctions and operators\nHigher-order functions\nPerformance notes\n\n\n\n\n\nREADME.rst\n\n\n\n\nsnuggs\n\n\n\nSnuggs are s-expressions for Numpy\n>>> snuggs.eval(\"(+ (asarray 1 1) (asarray 2 2))\")\narray([3, 3])\n\nSyntax\nSnuggs wraps Numpy in expressions with the following syntax:\nexpression = \"(\" (operator | function) *arg \")\"\narg = expression | name | number | string\n\n\nExamples\n\nAddition of two numbers\nimport snuggs\nsnuggs.eval('(+ 1 2)')\n# 3\n\nMultiplication of a number and an array\nArrays can be created using asarray.\nsnuggs.eval(\"(* 3.5 (asarray 1 1))\")\n# array([ 3.5,  3.5])\n\nEvaluation context\nExpressions can also refer by name to arrays in a local context.\nsnuggs.eval(\"(+ (asarray 1 1) b)\", b=np.array([2, 2]))\n# array([3, 3])\nThis local context may be provided using keyword arguments (e.g.,\nb=np.array([2, 2])), or by passing a dictionary that stores\nthe keys and associated array values. Passing a dictionary, specifically\nan OrderedDict, is important when using a function or operator that\nreferences the order in which values have been provided. For example,\nthe read function will lookup the i-th value passed:\nctx = OrderedDict((\n    ('a', np.array([5, 5])),\n    ('b', np.array([2, 2]))\n))\nsnuggs.eval(\"(- (read 1) (read 2))\", ctx)\n# array([3, 3])\n\nFunctions and operators\nArithmetic (* + / -) and logical (< <= == != >= > & |) operators are\navailable. Members of the numpy module such as asarray(), mean(),\nand where() are also available.\nsnuggs.eval(\"(mean (asarray 1 2 4))\")\n# 2.3333333333333335\nsnuggs.eval(\"(where (& tt tf) 1 0)\",\n    tt=numpy.array([True, True]),\n    tf=numpy.array([True, False]))\n# array([1, 0])\n\nHigher-order functions\nNew in snuggs 1.1 are higher-order functions map and partial.\nsnuggs.eval(\"((partial * 2) 2)\")\n# 4\n\nsnuggs.eval('(asarray (map (partial * 2) (asarray 1 2 3)))')\n# array([2, 4, 6])\n\nPerformance notes\nSnuggs makes simple calculator programs possible. 
None of the optimizations\nof, e.g., numexpr (multithreading,\nelimination of temporary data, etc) are currently available.\nIf you're looking to combine Numpy with a more complete Lisp, see\nHy:\n=> (import numpy)\n=> (* 2 (.asarray numpy [1 2 3]))\narray([2, 4, 6])\n\n\n", "description": "S-expressions for Numpy"}, {"name": "snowflake-connector-python", "readme": "\nThis package includes the Snowflake Connector for Python, which conforms to the Python DB API 2.0 specification:\nhttps://www.python.org/dev/peps/pep-0249/\nSnowflake Documentation is available at:\nhttps://docs.snowflake.com/\nSource code is also available at: https://github.com/snowflakedb/snowflake-connector-python\nRelease Notes\n\n\nv3.1.1(August 28,2023)\n\nFixed a bug in retry logic for okta authentication to refresh token.\nSupport RSAPublicKey when constructing AuthByKeyPair in addition to raw bytes.\nFixed a bug when connecting through SOCKS5 proxy, the attribute proxy_header is missing on SOCKSProxyManager.\nCherry-picked https://github.com/urllib3/urllib3/commit/fd2759aa16b12b33298900c77d29b3813c6582de onto vendored urllib3 (v1.26.15) to enable enforce_content_length by default.\nFixed a bug in tag generation of OOB telemetry event.\n\n\n\nv3.1.0(July 31,2023)\n\n\nAdded a feature that lets you add connection definitions to the connections.toml configuration file. A connection definition refers to a collection of connection parameters, for example, if you wanted to define a connection named `prod``:\n[prod]\naccount = \"my_account\"\nuser = \"my_user\"\npassword = \"my_password\"\n\nBy default, we look for the connections.toml file in the location specified in the SNOWFLAKE_HOME environment variable (default: ~/.snowflake). If this folder does not exist, the Python connector looks for the file in the platformdirs location, as follows:\n\nOn Linux: ~/.config/snowflake/,  but follows XDG settings\nOn Mac: ~/Library/Application Support/snowflake/\nOn Windows: %USERPROFILE%\\AppData\\Local\\snowflake\\\n\nYou can determine which file is used by running the following command:\npython -c \"from snowflake.connector.constants import CONNECTIONS_FILE; print(str(CONNECTIONS_FILE))\"\n\n\n\nBumped cryptography dependency from <41.0.0,>=3.1.0 to >=3.1.0,<42.0.0.\n\n\nImproved OCSP response caching to remove tmp cache files on Windows.\n\n\nImproved OCSP response caching to reduce the times of disk writing.\n\n\nAdded a parameter server_session_keep_alive in SnowflakeConnection that skips session deletion when client connection closes.\n\n\nTightened our pinning of platformdirs, to prevent their new releases breaking us.\n\n\nFixed a bug where SFPlatformDirs would incorrectly append application_name/version to its path.\n\n\nAdded retry reason for queries that are retried by the client.\n\n\nFixed a bug where write_pandas fails when user does not have the privilege to create stage or file format in the target schema, but has the right privilege for the current schema.\n\n\nRemove Python 3.7 support.\n\n\nWorked around a segfault which sometimes occurred during cache serialization in multi-threaded scenarios.\n\n\nImproved error handling of connection reset error.\n\n\nFixed a bug about deleting the temporary files happened when running PUT command.\n\n\nAllowed to pass type_mapper to fetch_pandas_batches() and fetch_pandas_all().\n\n\nFixed a bug where pickle.dump segfaults during cache serialization in multi-threaded scenarios.\n\n\nImproved retry logic for okta authentication to refresh token if authentication gets 
throttled.\n\n\nNote that this release does not include the changes introduced in the previous 3.1.0a1 release. Those will be released at a later time.\n\n\n\n\nv3.0.4(May 23,2023)\n\nFixed a bug in which cursor.execute() could modify the argument statement_params dictionary object when executing a multistatement query.\nAdded the json_result_force_utf8_decoding connection parameter to force decoding JSON content in utf-8 when the result format is JSON.\nFixed a bug in which we cannot call SnowflakeCursor.nextset before fetching the result of the first query if the cursor runs an async multistatement query.\nBumped vendored library urllib3 to 1.26.15\nBumped vendored library requests to 2.29.0\nFixed a bug when _prefetch_hook() was not called before yielding results of execute_async().\nFixed a bug where some ResultMetadata fields were marked as required when they were optional.\nBumped pandas dependency from <1.6.0,>=1.0.0 to >=1.0.0,<2.1.0\nFixed a bug where bulk insert converts date incorrectly.\nAdd support for Geometry types.\n\n\n\nv3.0.3(April 20, 2023)\n\nFixed a bug that prints error in logs for GET command on GCS.\nAdded a parameter that allows users to skip file uploads to stage if file exists on stage and contents of the file match.\nFixed a bug that occurred when writing a Pandas DataFrame with non-default index in snowflake.connector.pandas_tool.write_pandas.\nFixed a bug that occurred when writing a Pandas DataFrame with column names containing double quotes in snowflake.connector.pandas_tool.write_pandas.\nFixed a bug that occurred when writing a Pandas DataFrame with binary data in snowflake.connector.pandas_tool.write_pandas.\nImproved type hint of SnowflakeCursor.execute method.\nFail instantly upon receiving 403: Forbidden HTTP response for a login-request.\nImproved GET logging to warn when downloading multiple files with the same name.\n\n\n\nv3.0.2(March 23, 2023)\n\nFixed a memory leak in the logging module of the Cython extension.\nFixed a bug where the put command on AWS raised AttributeError when uploading file composed of multiple parts.\nFixed a bug of incorrect type hints of SnowflakeCursor.fetch_arrow_all and SnowflakeCursor.fetchall.\nFixed a bug where snowflake.connector.util_text.split_statements swallows the final line break in the case when there are no space between lines.\nImproved logging to mask tokens in case of errors.\nValidate SSO URL before opening it in the browser for External browser authenticator.\n\n\n\nv3.0.1(February 28, 2023)\n\nImproved the robustness of OCSP response caching to handle errors in cases of serialization and deserialization.\nUpdated async_executes method's doc-string.\nErrors raised now have a query field that contains the SQL query that caused them when available.\nFixed a bug where MFA token caching would refuse to work until restarted instead of reauthenticating.\nReplaced the dependency on setuptools in favor of packaging.\nFixed a bug where AuthByKeyPair.handle_timeout should pass keyword arguments instead of positional arguments when calling AuthByKeyPair.prepare.\n\n\n\nv3.0.0(January 26, 2023)\n\nFixed a bug where write_pandas did not use user-specified schema and database to create intermediate objects\nFixed a bug where HTTP response code of 429 were not retried\nFixed a bug where MFA token caching was not working\nBumped pyarrow dependency from >=8.0.0,<8.1.0 to >=10.0.1,<10.1.0\nBumped pyOpenSSL dependency from <23.0.0 to <24.0.0\nDuring browser-based authentication, the SSO url is now printed before opening 
it in the browser\nIncreased the level of a log for when ArrowResult cannot be imported\nAdded a minimum MacOS version check when compiling C-extensions\nEnabled fetch_arrow_all and fetch_arrow_batches to handle async query results\n\n\n\nv2.9.0(December 9, 2022)\n\nFixed a bug where the permission of the file downloaded via GET command is changed\nReworked authentication internals to allow users to plug custom key-pair authenticators\nMulti-statement query execution is now supported through cursor.execute and cursor.executemany\n\nThe Snowflake parameter MULTI_STATEMENT_COUNT can be altered at the account, session, or statement level. An additional argument, num_statements, can be provided to execute to use this parameter at the statement level. It must be provided to executemany to submit a multi-statement query through the method. Note that bulk insert optimizations available through executemany are not available when submitting multi-statement queries.\n\nBy default the parameter is 1, meaning only a single query can be submitted at a time\nSet to 0 to submit any number of statements in a multi-statement query\nSet to >1 to submit the specified exact number of statements in a multi-statement query\n\n\nBindings are accepted in the same way for multi-statements as they are for single statement queries\nAsynchronous multi-statement query execution is supported. Users should still use get_results_from_sfqid to retrieve results\nTo access the results of each query, users can call SnowflakeCursor.nextset() as specified in the DB 2.0 API (PEP-249), to iterate through each statements results\n\nThe first statement's results are accessible immediately after calling execute (or get_results_from_sfqid if asynchronous) through the existing fetch*() methods\n\n\n\n\n\n\n\nv2.8.3(November 28,2022)\n\nBumped cryptography dependency from <39.0.0 to <41.0.0\nFixed a bug where expired OCSP response cache caused infinite recursion during cache loading\n\n\n\nv2.8.2(November 18,2022)\n\nImproved performance of OCSP response caching\nDuring the execution of GET commands we no longer resolve target location on the local machine\nImproved performance of regexes used for PUT/GET SQL statement detection. CVE-2022-42965\n\n\n\nv2.8.1(October 30,2022)\n\nBumped cryptography dependency from <37.0.0 to <39.0.0\nBumped pandas dependency from <1.5.0 to <1.6.0\nFixed a bug where write_pandas wouldn't write an empty DataFrame to Snowflake\nWhen closing connection async query status checking is now parallelized\nFixed a bug where test logging would be enabled on Jenkins workers in non-Snowflake Jenkins machines\nEnhanced the atomicity of write_pandas when overwrite is set to True\n\n\n\nv2.8.0(September 27,2022)\n\nFixed a bug where rowcount was deleted when the cursor was closed\nFixed a bug where extTypeName was used even when it was empty\nUpdated how telemetry entries are constructed\nAdded telemetry for imported root packages during run-time\nAdded telemetry for using write_pandas\nFixed missing dtypes when calling fetch_pandas_all() on empty result\nThe write_pandas function now supports providing additional arguments to be used by DataFrame.to_parquet\nAll optional parameters of write_pandas can now be provided to pd_writer and make_pd_writer to be used with DataFrame.to_sql\n\n\n\nv2.7.12(August 26,2022)\n\nFixed a bug where timestamps fetched as pandas.DataFrame or pyarrow.Table would overflow for the sake of unnecessary precision. 
In the case where an overflow cannot be prevented a clear error will be raised now.\nAdded in-file caching for OCSP response caching\nThe write_pandas function now supports transient tables through the new table_type argument which supersedes create_temp_table argument\nFixed a bug where calling fetch_pandas_batches incorrectly raised NotSupportedError after an async query was executed\nAdded support for OKTA Identity Engine\n\n\n\nv2.7.11(July 26,2022)\n\nAdded minimum version pin to typing_extensions\n\n\n\nv2.7.10(July 22,2022)\n\nRelease wheels are now built on manylinux2014\nBumped supported pyarrow version to >=8.0.0,<8.1.0\nUpdated vendored library versions requests to 2.28.1 and urllib3 to 1.26.10\nAdded in-memory cache to OCSP requests\nAdded overwrite option to write_pandas\nAdded attribute lastrowid to SnowflakeCursor in compliance with PEP249.\nFixed a bug where gzip compressed http requests might be garbled by an unflushed buffer\nAdded new connection diagnostics capabilities to snowflake-connector-python\nBumped numpy dependency from <1.23.0 to <1.24.0\n\n\n\nv2.7.9(June 26,2022)\n\nFixed a bug where errors raised during get_results_from_sfqid() were missing errno\nFixed a bug where empty results containing GEOGRAPHY type raised IndexError\n\n\n\nv2.7.8(May 28,2022)\n\nUpdated PyPi documentation link to python specific main page\nFixed an error message that appears when pandas optional dependency group is required but is not installed\nImplemented the DB API 2 callproc() method\nFixed a bug where decryption took place before decompression when downloading files from stages\nFixed a bug where s3 accelerate configuration was handled incorrectly\nExtra named arguments given executemany() are now forwarded to execute()\nAutomatically sets the application name to streamlit when streamlit is imported and application name was not explicitly set\nBumped pyopenssl dependency version to >=16.2.0,<23.0.0\n\n\n\nv2.7.7(April 30,2022)\n\nBumped supported pandas version to < 1.5.0\nFixed a bug where partner name (from SF_PARTNER environmental variable) was set after connection was established\nAdded a new _no_retry option to executing queries\nFixed a bug where extreme timestamps lost precision\n\n\n\nv2.7.6(March 17,2022)\n\nFixed missing python_requires tag in setup.cfg\n\n\n\nv2.7.5(March 17,2022)\n\nAdded an option for partners to inject their name through an environmental variable (SF_PARTNER)\nFixed a bug where we would not wait for input if a browser window couldn't be opened for SSO login\nDeprecate support for Python 3.6\nExported a type definition for SnowflakeConnection\nFixed a bug where final Arrow table would contain duplicate index numbers when using fetch_pandas_all\n\n\n\nv2.7.4(February 05,2022)\n\nAdd Geography Types\nRemoving automated incident reporting code\nFixed a bug where circular reference would prevent garbage collection on some objects\nFixed a bug where DatabaseError was thrown when executing against a closed cursor instead of InterfaceError\nFixed a bug where calling executemany would crash if an iterator was supplied as args\nFixed a bug where violating NOT NULL constraint raised DatabaseError instead of IntegrityError\n\n\n\nv2.7.3(January 22,2022)\n\nFixed a bug where timezone was missing from retrieved Timestamp_TZ columns\nFixed a bug where a long running PUT/GET command could hit a Storage Credential Error while renewing credentials\nFixed a bug where py.typed was not being included in our release wheels\nFixed a bug where negative numbers were mangled 
when fetched with the connection parameter arrow_number_to_decimal\nImproved the error message that is encountered when running GET for a non-existing file\nFixed rendering of our long description for PyPi\nFixed a bug where DUO authentication ran into errors if sms authentication was disabled for the user\nAdd the ability to auto-create a table when writing a pandas DataFrame to a Snowflake table\nBumped the maximum dependency version of numpy from <1.22.0 to <1.23.0\n\n\n\nv2.7.2(December 17,2021)\n\nAdded support for Python version 3.10.\nFixed a bug where _get_query_status failed if there was a network error.\nAdded the interpolate_empty_sequences connection parameter to control interpolating empty sequences into queries.\nFixed an issue where BLOCKED was considered to be an error by is_an_error.\nAdded source field to Telemetry.\nIncreased the cryptography dependency version.\nIncreased the pyopenssl dependency version.\nFixed an issue where dbapi.Binary returned a string instead of bytes.\nIncreased the required version of numpy.\nIncreased the required version of keyring.\nFixed issue so that fetch functions now return typed DataFrames and pyarrow Tables for empty results.\nAdded py.typed\nImproved error messages for PUT/GET.\nAdded Cursor.query attribute for accessing last query.\nIncreased the required version of pyarrow.\n\n\n\nv2.7.1(November 19,2021)\n\nFixed a bug where uploading a streaming file with multiple parts did not work.\nJWT tokens are now regenerated when a request is retried.\nUpdated URL escaping when uploading to AWS S3 to match how S3 escapes URLs.\nRemoved the unused s3_connection_pool_size connection parameter.\nBlocked queries are now considered to be still running.\nSnowflake specific exceptions are now set using Exception arguments.\nFixed an issue where use_s3_regional_url was not set correctly by the connector.\n\n\n\nv2.7.0(October 25,2021)\n\nRemoving cloud sdks. snowflake-connector-python will not install them anymore. Recreate your virtualenv to get rid of unnecessary dependencies.\nInclude Standard C++ headers.\nUpdate minimum dependency version pin of cryptography.\nFixed a bug where error number would not be added to Exception messages.\nFixed a bug where client_prefetch_threads parameter was not respected when pre-fetching results.\nUpdate signature of SnowflakeCursor.execute's params argument.\n\n\n\nv2.6.2(September 27,2021)\n\nUpdated vendored urllib3 and requests versions.\nFixed a bug where GET commands would fail to download files from sub directories from stages.\nAdded a feature where the connector will print the url it tried to open when it is unable to open it for external browser authentication.\n\n\n\nv2.6.1(September 16,2021)\n\nBump pandas version from <1.3 to <1.4\nFixing Python deprecation warnings.\nAdded more type-hints.\nMarked HeartBeatTimer threads as daemon threads.\nForce cast a column into integer in write_pandas to avoid a rare behavior that would lead to crashing.\nImplement AWS signature V4 to new SDKless PUT and GET.\nRemoved a deprecated setuptools option from setup.py.\nFixed a bug where error logs would be printed for query executions that produce no results.\nFixed a bug where the temporary stage for bulk array inserts exists.\n\n\n\nv2.6.0(August 29,2021)\n\nInternal change to the implementation of result fetching.\nUpgraded Pyarrow version from 3.0 to 5.0.\nInternal change to the implementation for PUT and GET. 
A new connection parameter use_new_put_get was added to toggle between implementations.\nFixed a bug where executemany did not detect the type of data it was inserting.\nUpdated the minimum Mac OSX build target from 10.13 to 10.14.\n\n\n\nv2.5.1(July 31,2021)\n\nFixes Python Connector bug that prevents the connector from using AWS S3 Regional URL. The driver currently overrides the regional URL information with the default S3 URL causing failure in PUT.\n\n\n\nv2.5.0(July 22,2021)\n\nFixed a bug in write_pandas when quote_identifiers is set to True the function would not actually quote column names.\nBumping idna dependency pin from <3,>=2.5 to >=2.5,<4\nFix describe method when running insert into ... commands\n\n\n\nv2.4.6(June 25,2021)\n\nFixed a potential memory leak.\nRemoved upper certifi version pin.\nUpdated vendored libraries , urllib(1.26.5) and requests(2.25.1).\nReplace pointers with UniqueRefs.\nChanged default value of client_session_keep_alive to None.\nAdded the ability to retrieve metadata/schema without executing the query (describe method).\n\n\n\nv2.4.5(June 15,2021)\n\nFix for incorrect JWT token invalidity when an account alias with a dash in it is used for regionless account URL.\n\n\n\nv2.4.4(May 30,2021)\n\nFixed a segfault issue when using DictCursor and arrow result format with out of range dates.\nAdds new make_pd_writer helper function\n\n\n\nv2.4.3(April 29,2021)\n\nUses s3 regional URL in private links when a param is set.\nNew Arrow NUMBER to Decimal converter option.\nUpdate pyopenssl requirement from <20.0.0,>=16.2.0 to >=16.2.0,<21.0.0.\nUpdate pandas requirement from <1.2.0,>=1.0.0 to >=1.0.0,<1.3.0.\nUpdate numpy requirement from <1.20.0 to <1.21.0.\n\n\n\nv2.4.2(April 03,2021)\n\nPUT statements are now thread-safe.\n\n\n\nv2.4.1(March 04,2021)\n\nMake connection object exit() aware of status of parameter autocommit\n\n\n\nv2.4.0(March 04,2021)\n\nAdded support for Python 3.9 and PyArrow 3.0.x.\nAdded support for the upcoming multipart PUT threshold keyword.\nAdded support for using the PUT command with a file-like object.\nAdded some compilation flags to ease building conda community package.\nRemoved the pytz pin because it doesn't follow semantic versioning release format.\nAdded support for optimizing batch inserts through bulk array binding.\n\n\n\nv2.3.10(February 01,2021)\n\nImproved query ID logging and added request GUID logging.\nFor dependency checking, increased the version condition for the pyjwt package from <2.0.0 to <3.0.0.\n\n\n\nv2.3.9(January 27,2021)\n\nThe fix to add proper proxy CONNECT headers for connections made over proxies.\n\n\n\nv2.3.8(January 14,2021)\n\nArrow result conversion speed up.\nSend all Python Connector exceptions to in-band or out-of-band telemetry.\nVendoring requests and urllib3 to contain OCSP monkey patching to our library only.\nDeclare dependency on setuptools.\n\n\n\nv2.3.7(December 10,2020)\n\nAdded support for upcoming downscoped GCS credentials.\nTightened the pyOpenSSL dependency pin.\nRelaxed the boto3 dependency pin up to the next major release.\nRelaxed the cffi dependency pin up to the next major release.\nAdded support for executing asynchronous queries.\nDropped support for Python 3.5.\n\n\n\nv2.3.6(November 16,2020)\n\nFixed a bug that was preventing the connector from working on Windows with Python 3.8.\nImproved the string formatting in exception messages.\nFor dependency checking, increased the version condition for the cryptography package from <3.0.0 to <4.0.0.\nFor dependency checking, 
increased the version condition for the pandas package from <1.1 to <1.2.\n\n\n\nv2.3.5(November 03,2020)\n\nUpdated the dependency on the cryptography package from version 2.9.2 to 3.2.1.\n\n\n\nv2.3.4(October 26,2020)\n\nAdded an optional parameter to the write_pandas function to specify that identifiers should not be quoted before being sent to the server.\nThe write_pandas function now honors default and auto-increment values for columns when inserting new rows.\nUpdated the Python Connector OCSP error messages and accompanying telemetry Information.\nEnabled the runtime pyarrow version verification to fail gracefully. Fixed a bug with AWS glue environment.\nUpgraded the version of boto3 from 1.14.47 to 1.15.9.\nUpgraded the version of idna from 2.9 to 2.10.\n\n\n\nv2.3.3(October 05,2020)\n\nSimplified the configuration files by consolidating test settings.\nIn the Connection object, the execute_stream and execute_string methods now filter out empty lines from their inputs.\n\n\n\nv2.3.2(September 14,2020)\n\nFixed a bug where a file handler was not closed properly.\nFixed various documentation typos.\n\n\n\nv2.3.1(August 25,2020)\n\nFixed a bug where 2 constants were removed by mistake.\n\n\n\nv2.3.0(August 24,2020)\n\nWhen the log level is set to DEBUG, log the OOB telemetry entries that are sent to Snowflake.\nFixed a bug in the PUT command where long running PUTs would fail to re-authenticate to GCP for storage.\nUpdated the minimum build target MacOS version to 10.13.\n\n\n\nv2.2.10(August 03,2020)\n\nImproved an error message for when \"pandas\" optional dependency group is not installed and user tries to fetch data into a pandas DataFrame. It'll now point user to our online documentation.\n\n\n\nv2.2.9(July 13,2020)\n\nConnection parameter validate_default_parameters now verifies known connection parameter names and types. 
It emits warnings for any unexpected types or names.\nCorrect logging messages for compiled C++ code.\nFixed an issue in write_pandas with location determination when database or schema name was included.\nBumped boto3 dependency version.\nFixed an issue where uploading a file with special UTF-8 characters in its name corrupted the file.\n\n\n\nv2.2.8(June 22,2020)\n\nSwitched docstring style to Google from Epydoc and added automated tests to enforce the standard.\nFixed a memory leak in DictCursor's Arrow format code.\n\n\n\nv2.2.7(June 1,2020)\n\nSupport azure-storage-blob v12 as well as v2 (for Python 3.5.0-3.5.1) by Python Connector\nFixed a bug where the temporary directory path was not Windows compatible in the write_pandas function\nAdded out of band telemetry error reporting of unknown errors\n\n\n\nv2.2.6(May 11,2020)\n\nUpdate Pyarrow version from 0.16.0 to 0.17.0 for Python connector\nRemove more restrictive application name enforcement.\nMissing keyring dependency will not raise an exception, only emit a debug log from now on.\nBumping boto3 to <1.14\nFix flake8 3.8.0 new issues\nImplement Python log interceptor\n\n\n\nv2.2.5(April 30,2020)\n\nAdded a more efficient way to ingest a pandas.DataFrame into Snowflake, located in snowflake.connector.pandas_tools\nMore restrictive application name enforcement and standardizing it with other Snowflake drivers\nAdded checking and warning for users when they have a wrong version of pyarrow installed\n\n\n\nv2.2.4(April 10,2020)\n\nEmit warning only if trying to set a different setting of the use_openssl_only parameter\n\n\n\nv2.2.3(March 30,2020)\n\nSecure SSO ID Token\nAdd use_openssl_only connection parameter, which disables the usage of pure Python cryptographic libraries for FIPS\nAdd manylinux1 as well as manylinux2010\nFix a bug where a certificate file was opened and never closed in snowflake-connector-python.\nFix python connector skipping validation of GCP URLs\nAdds additional client driver config information to in band telemetry.\n\n\n\nv2.2.2(March 9,2020)\n\nFix retry with chunk_downloader.py for stability.\nSupport Python 3.8 for Linux and Mac.\n\n\n\nv2.2.1(February 18,2020)\n\nFix use DictCursor with execute_string #248\n\n\n\nv2.2.0(January 27,2020)\n\nDrop Python 2.7 support\nAWS: When OVERWRITE is false, which is set by default, the file is uploaded if no same file name exists in the stage. This used to check the content signature but it will no longer check. 
Azure and GCP already work this way.\nDocument Python connector dependencies on our GitHub page in addition to Snowflake docs.\nFix sqlalchemy and possibly python-connector warnings.\nFix GCP exception using the Python connector to PUT a file in a stage with auto_compress=false.\nBump up botocore requirements to 1.14.\nFix uppercasing of the authenticator breaking Okta URLs, which may include case-sensitive elements (#257).\nFix wrong result bug while using fetch_pandas_all() to get fixed numbers with large scales.\nIncrease multipart upload threshold for S3 to 64MB.\n\n\n\nv2.1.3(January 06,2020)\n\nFix GCP Put failed after hours\n\n\n\nv2.1.2(December 16,2019)\n\nFix the arrow bundling issue for python connector on mac.\nFix the arrow dll bundle issue on Windows. Add more logging.\n\n\n\nv2.1.1(December 12,2019)\n\nFix GZIP uncompressed content for Azure GET command.\nAdd support for GCS PUT and GET for private preview.\nSupport fetch as numpy value in arrow result format.\nFix NameError: name 'EmptyPyArrowIterator' is not defined for Mac.\nReturn empty dataframe for fetch_pandas_all() api if result set is empty.\n\n\n\nv2.1.0(December 2,2019)\n\nFix default ssl_context options\nPin more dependencies for Python Connector\nFix import of SnowflakeOCSPAsn1Crypto crashes Python on MacOS Catalina\nUpdate the release note that 1.9.0 was removed\nSupport DictCursor for arrow result format\nUpgrade Python's arrow lib to 0.15.1\nRaise Exception when PUT fails to Upload Data\nHandle year out of range correctly in arrow result format\n\n\n\nv2.0.4(November 13,2019)\n\nIncrease OCSP Cache expiry time from 24 hours to 120 hours.\nFix pyarrow cxx11 abi compatibility issue\nUse new query result format parameter in python tests\n\n\n\nv2.0.3(November 1,2019)\n\nFix for Pandas fetch API not handling the case where the first chunk is empty.\nUpdated the botocore, boto3 and requests packages to the latest versions.\nPinned stable versions of Azure urllib3 packages.\n\n\n\nv2.0.2(October 21,2019)\n\nFix sessions remaining open even if they are disposed manually. 
Retry deleting session if the connection is explicitly closed.\nFix memory leak in the new fetch pandas API\nFix Auditwheel failed with python37\nReduce the footprint of Python Connector\nSupport asn1crypto 1.1.x\nEnsure that the cython components are present for Conda package\n\n\n\nv2.0.1(October 04,2019)\n\nAdd asn1crypto requirement to mitigate incompatibility change\n\n\n\nv2.0.0(September 30,2019)\n\nRelease Python Connector 2.0.0 for Arrow format change.\nFix SF_OCSP_RESPONSE_CACHE_DIR referring to the OCSP cache response file directory and not the top level of directory.\nFix Malformed certificate ID key causes uncaught KeyError.\nNo retry for certificate errors.\nFix In-Memory OCSP Response Cache - PythonConnector\nMove AWS_ID and AWS_SECRET_KEY to their newer versions in the Python client\nFix result set downloader for ijson 2.5\nMake authenticator field case insensitive earlier\nUpdate USER-AGENT to be consistent with new format\nUpdate Python Driver URL Whitelist to support US Gov domain\nFix memory leak in python connector pandas df fetch API\n\n\n\nv1.9.1(October 4,2019)\n\nAdd asn1crypto requirement to mitigate incompatibility change.\n\n\n\nv1.9.0(August 26,2019) REMOVED from pypi due to dependency compatibility issues\n\nImplement converter for all arrow data types in python connector extension\nFix arrow error when returning empty result using python connector\nFix OCSP responder hang, AttributeError: 'ReadTimeout' object has no attribute 'message'\nUpdate OCSP Connection timeout.\nFix RevokedCertificateError OOB Telemetry events are not sent\nUncaught RevocationCheckError for FAIL_OPEN in create_pair_issuer_subject\nFix uncaught exception in generate_telemetry_data function\nFix connector loses context after connection drop/restore by retrying IncompleteRead error.\nMake tzinfo class at the module level instead of inlining\n\n\n\nv1.8.7(August 12,2019)\n\nRewrote validateDefaultParameters to validate the database, schema and warehouse at connection time. 
False by default.\nFix OCSP Server URL problem in multithreaded env\nFix Azure Gov PUT and GET issue\n\n\n\nv1.8.6(July 29,2019)\n\nReduce retries for OCSP from Python Driver\nAzure PUT issue: ValueError: I/O operation on closed file\nAdd client information to USER-AGENT HTTP header - PythonConnector\nBetter handling of OCSP cache download failure\n\n\n\nv1.8.5(July 15,2019)\n\nDrop Python 3.4 support for Python Connector\n\n\n\nv1.8.4(July 01,2019)\n\nUpdate Python Connector to discard invalid OCSP Responses while merging caches\n\n\n\nv1.8.3(June 17,2019)\n\nUpdate Client Driver OCSP Endpoint URL for Private Link Customers\nIgnore session gone 390111 when closing\nPython3.4 using requests 2.21.0 needs older version of urllib3\nUse Account Name for Global URL\n\n\n\nv1.8.2 (June 03,2019)\n\nPendulum datatype support\n\n\n\nv1.8.1 (May 20,2019)\n\nRevoked OCSP Responses persists in Driver Cache + Logging Fix\nFixed DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated\n\n\n\nv1.8.0 (May 10, 2019)\n\nsupport numpy.bool_ in binding type\nAdd Option to Skip Request Pooling\nAdd OCSP_MODE metric\nFixed PUT URI issue for Windows path\nOCSP SoftFail\n\n\n\nv1.7.11 (April 22, 2019)\n\nnumpy timestamp with timezone support\nqmark not binding None\n\n\n\nv1.7.10 (April 8, 2019)\n\nFix the incorrect custom Server URL in Python Driver for Privatelink\n\n\n\nv1.7.9 (March 25,2019)\n\nPython Interim Solution for Custom Cache Server URL\nInternal change for pending feature\n\n\n\nv1.7.8 (March 12,2019)\n\nAdd OCSP signing certificate validity check\n\n\n\nv1.7.7 (February 22,2019)\n\nSkip HEAD operation when OVERWRITE=true for PUT\nUpdate copyright year from 2018 to 2019 for Python\n\n\n\nv1.7.6 (February 08,2019)\n\nAdjusted pyasn1 and pyasn1-module requirements for Python Connector\nAdded idna to setup.py. 
made pyasn1 optional for Python2\n\n\n\nv1.7.5 (January 25, 2019)\n\nIncorporate \"kwargs\" style group of key-value pairs in connection's \"execute_string\" function.\n\n\n\nv1.7.4 (January 3, 2019)\n\nInvalidate outdated OCSP response when checking cache hit\nMade keyring use optional in Python Connector\nAdded SnowflakeNullConverter for Python Connector to skip all client side conversions\nHonor CLIENT_PREFETCH_THREADS to download the result set.\nFixed the hang when region=us-west-2 is specified.\nAdded Python 3.7 tests\n\n\n\nv1.7.3 (December 11, 2018)\n\nImproved the progress bar control for SnowSQL\nFixed PUT/GET progress bar for Azure\n\n\n\nv1.7.2 (December 4, 2018)\n\nRefactored OCSP checks\nAdjusted log level to mitigate confusion\n\n\n\nv1.7.1 (November 27, 2018)\n\nFixed regex pattern warning in cursor.py\nFixed 403 error for EU deployment\nFixed the epoch time to datetime object converter for Windows\n\n\n\nv1.7.0 (November 13, 2018)\n\nInternal change for pending feature.\n\n\n\nv1.6.12 (October 30, 2018)\n\nUpdated boto3 and botocore version dependency.\nCatch socket.EAI_NONAME for localhost socket and raise a better error message\nAdded client_session_keep_alive_heartbeat_frequency to control heartbeat timings for client_session_keep_alive.\n\n\n\nv1.6.11 (October 23, 2018)\n\nFixed exit_on_error=true not working if a PUT / GET error occurs\nFixed a backslash followed by a quote in a literal not being taken into account.\nAdded request_guid to each HTTP request for tracing.\n\n\n\nv1.6.10 (September 25, 2018)\n\nAdded client_session_keep_alive support.\nFixed multiline double quote expressions PR #117 (@bensowden)\nFixed binding datetime for TIMESTAMP type in qmark binding mode. PR #118 (@rhlahuja)\nRetry HTTP 405 to mitigate Nginx bug.\nAccept consent response for id token cache. WIP.\n\n\n\nv1.6.9 (September 13, 2018)\n\nChanged most INFO logs to DEBUG. Added INFO for key operations.\nFixed the URL query parser to get multiple values.\n\n\n\nv1.6.8 (August 30, 2018)\n\nUpdated boto3 and botocore version dependency.\n\n\n\nv1.6.7 (August 22, 2018)\n\nEnforce virtual host URL for PUT and GET.\nAdded retryCount, clientStartTime for query-request for better service.\n\n\n\nv1.6.6 (August 9, 2018)\n\nReplaced pycryptodome with pycryptodomex to avoid namespace conflict with PyCrypto.\nFixed hang if the connection is not explicitly closed since 1.6.4.\nReauthenticate for externalbrowser while running a query.\nFixed remove_comments option for SnowSQL.\n\n\n\nv1.6.5 (July 13, 2018)\n\nFixed the current object cache in the connection for id token use.\nAdded no OCSP cache server use option.\n\n\n\nv1.6.4 (July 5, 2018)\n\nFixed div by zero for Azure PUT command.\nCache id token for SSO. This feature is WIP.\nAdded telemetry client and job timings by @dsouzam.\n\n\n\nv1.6.3 (June 14, 2018)\n\nFixed binding long value for Python 2.\n\n\n\nv1.6.2 (June 7, 2018)\n\nRemoves username restriction for OAuth. PR 86 (@tjj5036)\nRetry OpenSSL.SysError in tests\nUpdated concurrent insert test as the server improved.\n\n\n\nv1.6.1 (May 17, 2018)\n\nEnable OCSP Dynamic Cache server for privatelink.\nEnsure the type of login_timeout attribute is int.\n\n\n\nv1.6.0 (May 3, 2018)\n\nEnable OCSP Cache server by default.\n\n\n\nv1.5.8 (April 26, 2018)\n\nFixed PUT command error 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.' 
for Azure deployment.\n\n\n\nv1.5.7 (April 19, 2018)\n\nFixed object has no attribute errors in Python3 for Azure deployment.\nRemoved ContentEncoding=gzip from the header for PUT command. This caused COPY failure if autocompress=false.\n\n\n\nv1.5.6 (April 5, 2018)\n\nUpdated boto3 and botocore version dependency.\n\n\n\nv1.5.5 (March 22, 2018)\n\nFixed TypeError: list indices must be integers or slices, not str. PR/Issue 75 (@daniel-sali).\nUpdated cryptography dependency.\n\n\n\nv1.5.4 (March 15, 2018)\n\nTightened pyasn1 and pyasn1-modules version requirements\nAdded OS and OS_VERSION session info.\nRelaxed pycryptodome version requirements. Version 3.5.0 should not be used.\n\n\n\nv1.5.3 (March 9, 2018)\n\nPulled back pyasn1 for OCSP check in Python 2. Python 3 continues using asn1crypto for better performance.\nLimit the upper bound of pycryptodome version to less than 3.5.0 for Issue 65.\n\n\n\nv1.5.2 (March 1, 2018)\n\nFixed failure in case HOME/USERPROFILE is not set.\nUpdated boto3 and botocore version dependency.\n\n\n\nv1.5.1 (February 15, 2018)\n\nPrototyped oauth. Won't work without the server change.\nRetry OCSP data parse failure\nFixed paramstyle=qmark binding for SQLAlchemy\n\n\n\nv1.5.0 (January 26, 2018)\n\nRemoved pyasn1 and pyasn1-modules from the dependency.\nPrototyped key pair authentication.\nFixed OCSP response cache expiration check.\n\n\n\nv1.4.17 (January 19, 2018)\n\nAdjusted pyasn1 and pyasn1-modules version dependency. PR 48 (@baxen)\nStarted replacing pyasn1 with asn1crypto. Not activated yet.\n\n\n\nv1.4.16 (January 16, 2018)\n\nAdded OCSP cache related tools.\n\n\n\nv1.4.15 (January 11, 2018)\n\nAdded OCSP cache server option.\n\n\n\nv1.4.14 (December 14, 2017)\n\nImproved OCSP response dump util.\n\n\n\nv1.4.13 (November 30, 2017)\n\nUpdated boto3 and botocore version dependency.\n\n\n\nv1.4.12 (November 16, 2017)\n\nAdded qmark and numeric paramstyle support for server side binding.\nAdded timezone session parameter support to connections.\nFixed a file handler leak in OCSP checks.\n\n\n\nv1.4.11 (November 9, 2017)\n\nFixed Azure PUT command to use AES CBC key encryption.\nAdded retry for intermittent PyAsn1Error.\n\n\n\nv1.4.10 (October 26, 2017)\n\nAdded Azure support for PUT and GET commands.\nUpdated cryptography, boto3 and botocore version dependency.\n\n\n\nv1.4.9 (October 10, 2017)\n\nFixed a regression caused by pyasn1 upgrade.\n\n\n\nv1.4.8 (October 5, 2017)\n\nUpdated Fed/SSO parameters. The production version of Fed/SSO from Python Connector requires this version.\nRefactored for Azure support\nSet CLIENT_APP_ID and CLIENT_APP_VERSION in all requests\nSupport new behaviors of newer version of pyasn1. Relaxed the dependency.\nMaking socket timeout same as the login time\nFixed the case where no error message is attached.\n\n\n\nv1.4.7 (September 20, 2017)\n\nRefresh AWS token in PUT command if S3UploadFailedError includes the ExpiredToken error\nRetry all of 5xx in connection\n\n\n\nv1.4.6 (September 14, 2017)\n\nMitigated sigint handler config failure for SQLAlchemy\nImproved the message for invalid SSL certificate error\nRetry forever for query to mitigate 500 errors\n\n\n\nv1.4.5 (August 31, 2017)\n\nFixed regression in #34 by rewriting SAML 2.0 compliant service application support.\nCleaned up logger by moving instance to module.\n\n\n\nv1.4.4 (August 24, 2017)\n\nFixed Azure blob certificate issue. OCSP response structure bug fix\nAdded SAML 2.0 compliant service application support. 
preview feature.\nUpgraded SSL wrapper with the latest urllib3 pyopenssl glue module. It uses kqueue, epoll or poll in place of select to read data from socket if available.\n\n\n\nv1.4.3 (August 17, 2017)\n\nChanged the log levels for some messages from ERROR to DEBUG so they are not confused with real incidents. In fact, they are not real issues but signals for connection retry.\nAdded certifi to the dependent component list to mitigate CA root certificate out of date issue.\nSet the maximum versions of dependent components boto3 and botocore.\nUpdated cryptography and pyOpenSSL version dependency.\nAdded a connection parameter validate_default_parameters to validate the default database, schema and warehouse. If the specified object doesn't exist, it raises an error.\n\n\n\nv1.4.2 (August 3, 2017)\n\nFixed retry HTTP 400 in upload file when AWS token expires\nRelaxed the version of dependent components pyasn1 and pyasn1-modules\n\n\n\nv1.4.1 (July 26, 2017)\n\nPinned pyasn1 and pyasn1-modules versions to 0.2.3 and 0.0.9, respectively\n\n\n\nv1.4.0 (July 6, 2017)\n\nRelaxed the versions of dependent components boto3, botocore, cffi and cryptography and pyOpenSSL\nMinor improvements in OCSP response file cache\n\n\n\nv1.3.18 (June 15, 2017)\n\nFixed OCSP response cache file not found issue on Windows. Drive letter was taken off\nUse less restrictive cryptography>=1.7,<1.8\nAdded ORC detection in PUT command\n\n\n\nv1.3.17 (June 1, 2017)\n\nTimeout OCSP request in 60 seconds and retry\nSet autocommit and abort_detached_query session parameters in authentication time if specified\nFixed cross region stage issue. Could not get files in us-west-2 region S3 bucket from us-east-1\n\n\n\nv1.3.16 (April 20, 2017)\n\nFixed issue in fetching DATE causing [Error 22] Invalid argument on Windows\nRetry on RuntimeError in requests\n\n\n\nv1.3.15 (March 30, 2017)\n\nRefactored data converters in fetch to improve performance\nFixed timestamp format FF to honor the scale of data type\nImproved the security of OKTA authentication with hostname verifications\nRetry PUT on the error OpenSSL.SSL.SysCallError 10053 with lower concurrency\nAdded raw_msg attribute to Error class\nRefactored session management\n\n\n\nv1.3.14 (February 24, 2017)\n\nImproved PUT and GET error handler.\nAdded proxy support to OCSP checks.\nUse proxy parameters for PUT and GET commands.\nAdded sfqid and sqlstate to the results from query results.\nFixed the connection timeout calculation based on login_timeout and network_timeout.\nImproved error messages in case of 403, 502 and 504 HTTP response code.\nUpgraded cryptography to 1.7.2, boto3 to 1.4.4 and botocore to 1.5.14.\nRemoved explicit DNS lookup for OCSP URL.\n\n\n\nv1.3.13 (February 9, 2017)\n\nFixed AWS SQS connection error with OCSP checks\nAdded login_timeout and network_timeout parameters to the Connection objects.\nFixed forbidden access error handling\n\n\n\nv1.3.12 (February 2, 2017)\n\nFixed region parameter. One character was truncated from the tail of account name\nImproved performance of fetching data by refactoring fetchone method\n\n\n\nv1.3.11 (January 27, 2017)\n\nFixed the regression in 1.3.8 that caused intermittent 504 errors\n\n\n\nv1.3.10 (January 26, 2017)\n\nCompress data in HTTP requests at all times except empty data or OKTA request\nRefactored FIXED, REAL and TIMESTAMP data fetch to improve performance. 
This mainly impacts SnowSQL\nAdded region option to support EU deployments better\nIncreased the retry counter for OCSP servers to mitigate intermittent failure\nRefactored HTTP access retry logic\n\n\n\nv1.3.9 (January 16, 2017)\n\nUpgraded botocore to 1.4.93 and boto3 to 1.4.3 to fix the HTTPS request failure in Python 3.6\nFixed python2 incompatible import http.client\nRetry OCSP validation in case of non-200 HTTP code returned\n\n\n\nv1.3.8 (January 12, 2017)\n\nConvert non-UTF-8 data in the large result set chunk to Unicode replacement characters to avoid decode error.\nUpdated copyright year to 2017.\nUse six package to support both PY2 and PY3 for some functions\nUpgraded cryptography to 1.7.1 to address MacOS Python 3.6 build issue.\nFixed OverflowError caused by invalid range of timestamp data for SnowSQL.\n\n\n\nv1.3.7 (December 8, 2016)\n\nIncreased the validity date acceptance window to prevent OCSP returning invalid responses due to out-of-scope validity dates for certificates.\nEnabled OCSP response cache file by default.\n\n\n\nv1.3.6 (December 1, 2016)\n\nUpgraded cryptography to 1.5.3, pyOpenSSL to 16.2.0 and cffi to 1.9.1.\n\n\n\nv1.3.5 (November 17, 2016)\n\nFixed CA list cache race condition\nAdded retry intermittent 400 HTTP Bad Request error\n\n\n\nv1.3.4 (November 3, 2016)\n\nAdded quoted_name data type support for binding by SQLAlchemy\nNot to compress parquet file in PUT command\n\n\n\nv1.3.3 (October 20, 2016)\n\nDowngraded botocore to 1.4.37 due to potential regression.\nIncreased the stability of PUT and GET commands\n\n\n\nv1.3.2 (October 12, 2016)\n\nUpgraded botocore to 1.4.52.\nSet the signature version to v4 for the AWS client. This impacts PUT, GET commands and fetching large result set.\n\n\n\nv1.3.1 (September 30, 2016)\n\nAdded an account name including subdomain.\n\n\n\nv1.3.0 (September 26, 2016)\n\n\nAdded support for the BINARY data type, which enables support for more Python data types:\n\n\nPython 3:\n\nbytes and bytearray can be used for binding.\nbytes is also used for fetching BINARY data type.\n\n\n\nPython 2:\n\nbytearray can be used for binding\nstr is used for fetching BINARY data type.\n\n\n\n\n\nAdded proxy_user and proxy_password connection parameters for proxy servers that require authentication.\n\n\n\n\nv1.2.8 (August 16, 2016)\n\nUpgraded botocore to 1.4.37.\nAdded Connection.execute_string and Connection.execute_stream to run multiple statements in a string and stream.\nIncreased the stability of fetching data for Python 2.\nRefactored memory usage in fetching large result set (Work in Progress).\n\n\n\nv1.2.7 (July 31, 2016)\n\nFixed snowflake.cursor.rowcount for INSERT ALL.\nForce OCSP cache invalidation after 24 hours for better security.\nUse use_accelerate_endpoint in PUT and GET if Transfer acceleration is enabled for the S3 bucket.\nFixed the side effect of python-future that loads test.py in the current directory.\n\n\n\nv1.2.6 (July 13, 2016)\n\nFixed the AWS token renewal issue with PUT command when uploading uncompressed large files.\n\n\n\nv1.2.5 (July 8, 2016)\n\nAdded retry for errors S3UploadFailedError and RetriesExceededError in PUT and GET, respectively.\n\n\n\nv1.2.4 (July 6, 2016)\n\nAdded max_connection_pool parameter to Connection so that you can specify the maximum number of HTTP/HTTPS connections in the pool.\nMinor enhancements for SnowSQL.\n\n\n\nv1.2.3 (June 29, 2016)\n\nFixed 404 issue in GET command. 
An extra slash character changed the S3 path and failed to identify the file to download.\n\n\n\nv1.2.2 (June 21, 2016)\n\nUpgraded botocore to 1.4.26.\nAdded retry for 403 error when accessing S3.\n\n\n\nv1.2.1 (June 13, 2016)\n\nImproved fetch performance for data types (part 2): DATE, TIME, TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ and TIMESTAMP_TZ.\n\n\n\nv1.2.0 (June 10, 2016)\n\nImproved fetch performance for data types (part 1): FIXED, REAL, STRING.\n\n\n\nv1.1.5 (June 2, 2016)\n\nUpgraded boto3 to 1.3.1 and botocore and 1.4.22.\nFixed snowflake.cursor.rowcount for DML by snowflake.cursor.executemany.\nAdded numpy data type binding support. numpy.intN, numpy.floatN and numpy.datetime64 can be bound and fetched.\n\n\n\nv1.1.4 (May 21, 2016)\n\nUpgraded cffi to 1.6.0.\nMinor enhancements to SnowSQL.\n\n\n\nv1.1.3 (May 5, 2016)\n\nUpgraded cryptography to 1.3.2.\n\n\n\nv1.1.2 (May 4, 2016)\n\nChanged the dependency of tzlocal optional.\nFixed charmap error in OCSP checks.\n\n\n\nv1.1.1 (Apr 11, 2016)\n\nFixed OCSP revocation check issue with the new certificate and AWS S3.\nUpgraded cryptography to 1.3.1 and pyOpenSSL to 16.0.0.\n\n\n\nv1.1.0 (Apr 4, 2016)\n\nAdded bzip2 support in PUT command. This feature requires a server upgrade.\nReplaced the self contained packages in snowflake._vendor with the dependency of boto3 1.3.0 and botocore 1.4.2.\n\n\n\nv1.0.7 (Mar 21, 2016)\n\nKeep pyOpenSSL at 0.15.1.\n\n\n\nv1.0.6 (Mar 15, 2016)\n\nUpgraded cryptography to 1.2.3.\nAdded support for TIME data type, which is now a Snowflake supported data type. This feature requires a server upgrade.\nAdded snowflake.connector.DistCursor to fetch the results in dict instead of tuple.\nAdded compression to the SQL text and commands.\n\n\n\nv1.0.5 (Mar 1, 2016)\n\nUpgraded cryptography to 1.2.2 and cffi to 1.5.2.\nFixed the conversion from TIMESTAMP_LTZ to datetime in queries.\n\n\n\nv1.0.4 (Feb 15, 2016)\n\nFixed the truncated parallel large result set.\nAdded retry OpenSSL low level errors ETIMEDOUT and ECONNRESET.\nTime out all HTTPS requests so that the Python Connector can retry the job or recheck the status.\nFixed the location of encrypted data for PUT command. They used to be in the same directory as the source data files.\nAdded support for renewing the AWS token used in PUT commands if the token expires.\n\n\n\nv1.0.3 (Jan 13, 2016)\n\n\nAdded support for the BOOLEAN data type (i.e. TRUE or FALSE). This changes the behavior of the binding for the bool type object:\n\nPreviously, bool was bound as a numeric value (i.e. 1 for True, 0 for False).\nNow, bool is bound as native SQL data (i.e. TRUE or FALSE).\n\n\n\nAdded the autocommit method to the Connection object:\n\nBy default, autocommit mode is ON (i.e. each DML statement commits the change).\nIf autocommit mode is OFF, the commit and rollback methods are enabled.\n\n\n\nAvoid segfault issue for cryptography 1.2 in Mac OSX by using 1.1 until resolved.\n\n\n\n\nv1.0.2 (Dec 15, 2015)\n\nUpgraded boto3 1.2.2, botocore 1.3.12.\nRemoved SSLv3 mapping from the initial table.\n\n\n\nv1.0.1 (Dec 8, 2015)\n\nMinor bug fixes.\n\n\n\nv1.0.0 (Dec 1, 2015)\n\nGeneral Availability release.\n\n\n\n"}, {"name": "sniffio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nsniffio: Sniff out which async library your code is running under\nQuickstart\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsniffio: Sniff out which async library your code is running under\nYou're writing a library. 
You've decided to be ambitious, and support\nmultiple async I/O packages, like Trio, and asyncio, and ... You've\nwritten a bunch of clever code to handle all the differences. But...\nhow do you know which piece of clever code to run?\nThis is a tiny package whose only purpose is to let you detect which\nasync library your code is running under.\n\nDocumentation: https://sniffio.readthedocs.io\nBug tracker and source code: https://github.com/python-trio/sniffio\nLicense: MIT or Apache License 2.0, your choice\nContributor guide: https://trio.readthedocs.io/en/latest/contributing.html\nCode of conduct: Contributors are requested to follow our code of\nconduct\nin all project spaces.\n\nThis library is maintained by the Trio project, as a service to the\nasync Python community as a whole.\n\nQuickstart\nfrom sniffio import current_async_library\nimport trio\nimport asyncio\n\nasync def print_library():\n    library = current_async_library()\n    print(\"This is:\", library)\n\n# Prints \"This is trio\"\ntrio.run(print_library)\n\n# Prints \"This is asyncio\"\nasyncio.run(print_library())\nFor more details, including how to add support to new async libraries,\nplease peruse our fine manual.\n\n\n", "description": "Sniffs out which async library code is running under"}, {"name": "smart-open", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsmart_open \u2014 utils for streaming large files in Python\nWhat?\nWhy?\nHow?\nDocumentation\nInstallation\nBuilt-in help\nMore examples\nCompression Handling\nTransport-specific Options\nS3 Credentials\nIterating Over an S3 Bucket's Contents\nGCS Credentials\nAzure Credentials\nDrop-in replacement of pathlib.Path.open\nHow do I ...?\nExtending smart_open\nTesting smart_open\nComments, bug reports\n\n\n\n\n\nREADME.rst\n\n\n\n\nsmart_open \u2014 utils for streaming large files in Python\n\n \n  \n\nWhat?\nsmart_open is a Python 3 library for efficient streaming of very large files from/to storages such as S3, GCS, Azure Blob Storage, HDFS, WebHDFS, HTTP, HTTPS, SFTP, or local filesystem. It supports transparent, on-the-fly (de-)compression for a variety of different formats.\nsmart_open is a drop-in replacement for Python's built-in open(): it can do anything open can (100% compatible, falls back to native open wherever possible), plus lots of nifty extra stuff on top.\nPython 2.7 is no longer supported. If you need Python 2.7, please use smart_open 1.10.1, the last version to support Python 2.\n\nWhy?\nWorking with large remote files, for example using Amazon's boto3 Python library, is a pain.\nboto3's Object.upload_fileobj() and Object.download_fileobj() methods require gotcha-prone boilerplate to use successfully, such as constructing file-like object wrappers.\nsmart_open shields you from that. It builds on boto3 and other remote storage libraries, but offers a clean unified Pythonic API. The result is less code for you to write and fewer bugs to make.\n\nHow?\nsmart_open is well-tested, well-documented, and has a simple Pythonic API:\n>>> from smart_open import open\n>>>\n>>> # stream lines from an S3 object\n>>> for line in open('s3://commoncrawl/robots.txt'):\n...    print(repr(line))\n...    break\n'User-Agent: *\\n'\n\n>>> # stream from/to compressed files, with transparent (de)compression:\n>>> for line in open('smart_open/tests/test_data/1984.txt.gz', encoding='utf-8'):\n...    
print(repr(line))\n'It was a bright cold day in April, and the clocks were striking thirteen.\\n'\n'Winston Smith, his chin nuzzled into his breast in an effort to escape the vile\\n'\n'wind, slipped quickly through the glass doors of Victory Mansions, though not\\n'\n'quickly enough to prevent a swirl of gritty dust from entering along with him.\\n'\n\n>>> # can use context managers too:\n>>> with open('smart_open/tests/test_data/1984.txt.gz') as fin:\n...    with open('smart_open/tests/test_data/1984.txt.bz2', 'w') as fout:\n...        for line in fin:\n...           fout.write(line)\n74\n80\n78\n79\n\n>>> # can use any IOBase operations, like seek\n>>> with open('s3://commoncrawl/robots.txt', 'rb') as fin:\n...     for line in fin:\n...         print(repr(line.decode('utf-8')))\n...         break\n...     offset = fin.seek(0)  # seek to the beginning\n...     print(fin.read(4))\n'User-Agent: *\\n'\nb'User'\n\n>>> # stream from HTTP\n>>> for line in open('http://example.com/index.html'):\n...     print(repr(line))\n...     break\n'<!doctype html>\\n'\nOther examples of URLs that smart_open accepts:\ns3://my_bucket/my_key\ns3://my_key:my_secret@my_bucket/my_key\ns3://my_key:my_secret@my_server:my_port@my_bucket/my_key\ngs://my_bucket/my_blob\nazure://my_bucket/my_blob\nhdfs:///path/file\nhdfs://path/file\nwebhdfs://host:port/path/file\n./local/path/file\n~/local/path/file\nlocal/path/file\n./local/path/file.gz\nfile:///home/user/file\nfile:///home/user/file.bz2\n[ssh|scp|sftp]://username@host//path/file\n[ssh|scp|sftp]://username@host/path/file\n[ssh|scp|sftp]://username:password@host/path/file\n\n\nDocumentation\n\nInstallation\nsmart_open supports a wide range of storage solutions, including AWS S3, Google Cloud and Azure.\nEach individual solution has its own dependencies.\nBy default, smart_open does not install any dependencies, in order to keep the installation size small.\nYou can install these dependencies explicitly using:\npip install smart_open[azure] # Install Azure deps\npip install smart_open[gcs] # Install GCS deps\npip install smart_open[s3] # Install S3 deps\n\nOr, if you don't mind installing a large number of third party libraries, you can install all dependencies using:\npip install smart_open[all]\n\nBe warned that this option increases the installation size significantly, e.g. over 100MB.\nIf you're upgrading from smart_open versions 2.x and below, please check out the Migration Guide.\n\nBuilt-in help\nFor detailed API info, see the online help:\nhelp('smart_open')\nor click here to view the help in your browser.\n\nMore examples\nFor the sake of simplicity, the examples below assume you have all the dependencies installed, i.e. you have done:\npip install smart_open[all]\n\n>>> import os, boto3\n>>> from smart_open import open\n>>>\n>>> # stream content *into* S3 (write mode) using a custom session\n>>> session = boto3.Session(\n...     aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],\n...     aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],\n... )\n>>> url = 's3://smart-open-py37-benchmark-results/test.txt'\n>>> with open(url, 'wb', transport_params={'client': session.client('s3')}) as fout:\n...     bytes_written = fout.write(b'hello world!')\n...     
print(bytes_written)\n12\n# stream from HDFS\nfor line in open('hdfs://user/hadoop/my_file.txt', encoding='utf8'):\n    print(line)\n\n# stream from WebHDFS\nfor line in open('webhdfs://host:port/user/hadoop/my_file.txt'):\n    print(line)\n\n# stream content *into* HDFS (write mode):\nwith open('hdfs://host:port/user/hadoop/my_file.txt', 'wb') as fout:\n    fout.write(b'hello world')\n\n# stream content *into* WebHDFS (write mode):\nwith open('webhdfs://host:port/user/hadoop/my_file.txt', 'wb') as fout:\n    fout.write(b'hello world')\n\n# stream from a completely custom s3 server, like s3proxy:\nfor line in open('s3u://user:secret@host:port@mybucket/mykey.txt'):\n    print(line)\n\n# Stream to Digital Ocean Spaces bucket providing credentials from boto3 profile\nsession = boto3.Session(profile_name='digitalocean')\nclient = session.client('s3', endpoint_url='https://ams3.digitaloceanspaces.com')\ntransport_params = {'client': client}\nwith open('s3://bucket/key.txt', 'wb', transport_params=transport_params) as fout:\n    fout.write(b'here we stand')\n\n# stream from GCS\nfor line in open('gs://my_bucket/my_file.txt'):\n    print(line)\n\n# stream content *into* GCS (write mode):\nwith open('gs://my_bucket/my_file.txt', 'wb') as fout:\n    fout.write(b'hello world')\n\n# stream from Azure Blob Storage\nconnect_str = os.environ['AZURE_STORAGE_CONNECTION_STRING']\ntransport_params = {\n    'client': azure.storage.blob.BlobServiceClient.from_connection_string(connect_str),\n}\nfor line in open('azure://mycontainer/myfile.txt', transport_params=transport_params):\n    print(line)\n\n# stream content *into* Azure Blob Storage (write mode):\nconnect_str = os.environ['AZURE_STORAGE_CONNECTION_STRING']\ntransport_params = {\n    'client': azure.storage.blob.BlobServiceClient.from_connection_string(connect_str),\n}\nwith open('azure://mycontainer/my_file.txt', 'wb', transport_params=transport_params) as fout:\n    fout.write(b'hello world')\n\nCompression Handling\nThe top-level compression parameter controls compression/decompression behavior when reading and writing.\nThe supported values for this parameter are:\n\ninfer_from_extension (default behavior)\ndisable\n.gz\n.bz2\n\nBy default, smart_open determines the compression algorithm to use based on the file extension.\n>>> from smart_open import open, register_compressor\n>>> with open('smart_open/tests/test_data/1984.txt.gz') as fin:\n...     print(fin.read(32))\nIt was a bright cold day in Apri\nYou can override this behavior to either disable compression, or explicitly specify the algorithm to use.\nTo disable compression:\n>>> from smart_open import open, register_compressor\n>>> with open('smart_open/tests/test_data/1984.txt.gz', 'rb', compression='disable') as fin:\n...     print(fin.read(32))\nb'\\x1f\\x8b\\x08\\x08\\x85F\\x94\\\\\\x00\\x031984.txt\\x005\\x8f=r\\xc3@\\x08\\x85{\\x9d\\xe2\\x1d@'\nTo specify the algorithm explicitly (e.g. for non-standard file extensions):\n>>> from smart_open import open, register_compressor\n>>> with open('smart_open/tests/test_data/1984.txt.gzip', compression='.gz') as fin:\n...     print(fin.read(32))\nIt was a bright cold day in Apri\nYou can also easily add support for other file extensions and compression formats.\nFor example, to open xz-compressed files:\n>>> import lzma, os\n>>> from smart_open import open, register_compressor\n\n>>> def _handle_xz(file_obj, mode):\n...      
return lzma.LZMAFile(filename=file_obj, mode=mode, format=lzma.FORMAT_XZ)\n\n>>> register_compressor('.xz', _handle_xz)\n\n>>> with open('smart_open/tests/test_data/1984.txt.xz') as fin:\n...     print(fin.read(32))\nIt was a bright cold day in Apri\nlzma is in the standard library in Python 3.3 and greater.\nFor 2.7, use backports.lzma.\n\nTransport-specific Options\nsmart_open supports a wide range of transport options out of the box, including:\n\nS3\nHTTP, HTTPS (read-only)\nSSH, SCP and SFTP\nWebHDFS\nGCS\nAzure Blob Storage\n\nEach option involves setting up its own set of parameters.\nFor example, for accessing S3, you often need to set up authentication, like API keys or a profile name.\nsmart_open's open function accepts a keyword argument transport_params which accepts additional parameters for the transport layer.\nHere are some examples of using this parameter:\n>>> import boto3\n>>> fin = open('s3://commoncrawl/robots.txt', transport_params=dict(client=boto3.client('s3')))\n>>> fin = open('s3://commoncrawl/robots.txt', transport_params=dict(buffer_size=1024))\nFor the full list of keyword arguments supported by each transport option, see the documentation:\nhelp('smart_open.open')\n\nS3 Credentials\nsmart_open uses the boto3 library to talk to S3.\nboto3 has several mechanisms for determining the credentials to use.\nBy default, smart_open will defer to boto3 and let the latter take care of the credentials.\nThere are several ways to override this behavior.\nThe first is to pass a boto3.Client object as a transport parameter to the open function.\nYou can customize the credentials when constructing the session for the client.\nsmart_open will then use the session when talking to S3.\nsession = boto3.Session(\n    aws_access_key_id=ACCESS_KEY,\n    aws_secret_access_key=SECRET_KEY,\n    aws_session_token=SESSION_TOKEN,\n)\nclient = session.client('s3', endpoint_url=..., config=...)\nfin = open('s3://bucket/key', transport_params=dict(client=client))\nYour second option is to specify the credentials within the S3 URL itself:\nfin = open('s3://aws_access_key_id:aws_secret_access_key@bucket/key', ...)\nImportant: The two methods above are mutually exclusive. If you pass an AWS client and the URL contains credentials, smart_open will ignore the latter.\nImportant: smart_open ignores configuration files from the older boto library.\nPort your old boto settings to boto3 in order to use them with smart_open.\n\nIterating Over an S3 Bucket's Contents\nSince going over all (or select) keys in an S3 bucket is a very common operation, there's also an extra function smart_open.s3.iter_bucket() that does this efficiently, processing the bucket keys in parallel (using multiprocessing):\n>>> from smart_open import s3\n>>> # we use workers=1 for reproducibility; you should use as many workers as you have cores\n>>> bucket = 'silo-open-data'\n>>> prefix = 'Official/annual/monthly_rain/'\n>>> for key, content in s3.iter_bucket(bucket, prefix=prefix, accept_key=lambda key: '/201' in key, workers=1, key_limit=3):\n...     
print(key, round(len(content) / 2**20))\nOfficial/annual/monthly_rain/2010.monthly_rain.nc 13\nOfficial/annual/monthly_rain/2011.monthly_rain.nc 13\nOfficial/annual/monthly_rain/2012.monthly_rain.nc 13\n\nGCS Credentials\nsmart_open uses the google-cloud-storage library to talk to GCS.\ngoogle-cloud-storage uses the google-cloud package under the hood to handle authentication.\nThere are several options to provide\ncredentials.\nBy default, smart_open will defer to google-cloud-storage and let it take care of the credentials.\nTo override this behavior, pass a google.cloud.storage.Client object as a transport parameter to the open function.\nYou can customize the credentials\nwhen constructing the client. smart_open will then use the client when talking to GCS. To follow along with\nthe example below, refer to Google's guide\nto setting up GCS authentication with a service account.\nimport os\nfrom google.cloud.storage import Client\nservice_account_path = os.environ['GOOGLE_APPLICATION_CREDENTIALS']\nclient = Client.from_service_account_json(service_account_path)\nfin = open('gs://gcp-public-data-landsat/index.csv.gz', transport_params=dict(client=client))\nIf you need more credential options, you can create an explicit google.auth.credentials.Credentials object\nand pass it to the Client. To create an API token for use in the example below, refer to the\nGCS authentication guide.\nimport os\nfrom google.auth.credentials import Credentials\nfrom google.cloud.storage import Client\ntoken = os.environ['GOOGLE_API_TOKEN']\ncredentials = Credentials(token=token)\nclient = Client(credentials=credentials)\nfin = open('gs://gcp-public-data-landsat/index.csv.gz', transport_params=dict(client=client))\n\nAzure Credentials\nsmart_open uses the azure-storage-blob library to talk to Azure Blob Storage.\nBy default, smart_open will defer to azure-storage-blob and let it take care of the credentials.\nAzure Blob Storage does not have any way of inferring credentials; therefore, passing an azure.storage.blob.BlobServiceClient\nobject as a transport parameter to the open function is required.\nYou can customize the credentials\nwhen constructing the client. smart_open will then use the client when talking to Azure Blob Storage. To follow along with\nthe example below, refer to Azure's guide\nto setting up authentication.\nimport os\nfrom azure.storage.blob import BlobServiceClient\nazure_storage_connection_string = os.environ['AZURE_STORAGE_CONNECTION_STRING']\nclient = BlobServiceClient.from_connection_string(azure_storage_connection_string)\nfin = open('azure://my_container/my_blob.txt', transport_params=dict(client=client))\nIf you need more credential options, refer to the\nAzure Storage authentication guide.\n\nDrop-in replacement of pathlib.Path.open\nsmart_open.open can also be used with Path objects.\nThe built-in Path.open() is not able to read text from compressed files, so use patch_pathlib to replace it with smart_open.open() instead.\nThis can be helpful when e.g. working with compressed files.\n>>> from pathlib import Path\n>>> from smart_open.smart_open_lib import patch_pathlib\n>>>\n>>> _ = patch_pathlib()  # replace `Path.open` with `smart_open.open`\n>>>\n>>> path = Path(\"smart_open/tests/test_data/crime-and-punishment.txt.gz\")\n>>>\n>>> with path.open(\"r\") as infile:\n...     
print(infile.readline()[:41])\n\u0412 \u043d\u0430\u0447\u0430\u043b\u0435 \u0438\u044e\u043b\u044f, \u0432 \u0447\u0440\u0435\u0437\u0432\u044b\u0447\u0430\u0439\u043d\u043e \u0436\u0430\u0440\u043a\u043e\u0435 \u0432\u0440\u0435\u043c\u044f\n\nHow do I ...?\nSee this document.\n\nExtending smart_open\nSee this document.\n\nTesting smart_open\nsmart_open comes with a comprehensive suite of unit tests.\nBefore you can run the test suite, install the test dependencies:\npip install -e .[test]\n\nNow, you can run the unit tests:\npytest smart_open\n\nThe tests are also run automatically with Travis CI on every commit push & pull request.\n\nComments, bug reports\nsmart_open lives on Github. You can file\nissues or pull requests there. Suggestions, pull requests and improvements welcome!\n\nsmart_open is open source software released under the MIT license.\nCopyright (c) 2015-now Radim \u0158eh\u016f\u0159ek.\n\n\n"}, {"name": "slicer", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nslicer [alpha]\nInstallation\nGetting Started\nContact us\n\n\n\n\n\nREADME.md\n\n\n\n\nslicer [alpha]\n\n\n\n\n\n\n(Equal Contribution) Samuel Jenkins & Harsha Nori & Scott Lundberg\nslicer wraps tensor-like objects and provides a uniform slicing interface via __getitem__.\n\nIt supports many data types including:\n\u00a0\u00a0\nnumpy |\npandas |\nscipy |\npytorch |\nlist |\ntuple |\ndict\nAnd enables upgraded slicing functionality on its objects:\n# Handles non-integer indexes for slicing.\nS(df)[:, [\"Age\", \"Income\"]]\n\n# Handles nested slicing in one call.\nS(nested_list)[..., :5]\nIt can also simultaneously slice many objects at once:\n# Gets first elements of both objects.\nS(first=df, second=ar)[0, :]\nThis package has 0 dependencies. Not even one.\nInstallation\nPython 3.6+ | Linux, Mac, Windows\npip install slicer\nGetting Started\nBasic anonymous slicing:\nfrom slicer import Slicer as S\nli = [[1, 2, 3], [4, 5, 6]]\nS(li)[:, 0:2].o\n# [[1, 2], [4, 5]]\ndi = {'x': [1, 2, 3], 'y': [4, 5, 6]}\nS(di)[:, 0:2].o\n# {'x': [1, 2], 'y': [4, 5]}\nBasic named slicing:\nimport pandas as pd\nimport numpy as np\ndf = pd.DataFrame({'A': [1, 3], 'B': [2, 4]})\nar = np.array([[5, 6], [7, 8]])\nsliced = S(first=df, second=ar)[0, :]\nsliced.first\n# A    1\n# B    2\n# Name: 0, dtype: int64\nsliced.second\n# array([5, 6])\nReal example:\nfrom slicer import Slicer as S\nfrom slicer import Alias as A\n\ndata = [[1, 2], [3, 4]]\nvalues = [[5, 6], [7, 8]]\nidentifiers = [\"id1\", \"id1\"]\ninstance_names = [\"r1\", \"r2\"]\nfeature_names = [\"f1\", \"f2\"]\nfull_name = \"A\"\n\nslicer = S(\n    data=data,\n    values=values,\n    # Aliases are objects that also function as slicing keys.\n    # A(obj, dim) where dim informs what dimension it can be sliced on.\n    identifiers=A(identifiers, 0),\n    instance_names=A(instance_names, 0),\n    feature_names=A(feature_names, 1),\n    full_name=full_name,\n)\n\nsliced = slicer[:, 1]  # Tensor-like parallel slicing on all objects\nassert sliced.data == [2, 4]\nassert sliced.instance_names == [\"r1\", \"r2\"]\nassert sliced.feature_names == \"f2\"\nassert sliced.values == [6, 8]\n\nsliced = slicer[\"r1\", \"f2\"]  # Example use of aliasing\nassert sliced.data == 2\nassert sliced.feature_names == \"f2\"\nassert sliced.instance_names == \"r1\"\nassert sliced.values == 6\nContact us\nRaise an issue on GitHub, or contact us at interpret@microsoft.com\n\n\n", "description": "Slicing library that wraps tensor-like objects and provides uniform slicing via getitem"}, {"name": "Shapely", "readme": 
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nShapely\nWhat is a ufunc?\nMultithreading\nUsage\nRequirements\nInstalling Shapely\nIntegration\nSupport\nCopyright & License\n\n\n\n\n\nREADME.rst\n\n\n\n\nShapely\n\n\n\n\n\nManipulation and analysis of geometric objects in the Cartesian plane.\n\nShapely is a BSD-licensed Python package for manipulation and analysis of\nplanar geometric objects. It is using the widely deployed open-source\ngeometry library GEOS (the engine of PostGIS, and a port of JTS).\nShapely wraps GEOS geometries and operations to provide both a feature rich\nGeometry interface for singular (scalar) geometries and higher-performance\nNumPy ufuncs for operations using arrays of geometries.\nShapely is not primarily focused on data serialization formats or coordinate\nsystems, but can be readily integrated with packages that are.\n\nWhat is a ufunc?\nA universal function (or ufunc for short) is a function that operates on\nn-dimensional arrays on an element-by-element fashion and supports array\nbroadcasting. The underlying for loops are implemented in C to reduce the\noverhead of the Python interpreter.\n\nMultithreading\nShapely functions generally support multithreading by releasing the Global\nInterpreter Lock (GIL) during execution. Normally in Python, the GIL prevents\nmultiple threads from computing at the same time. Shapely functions\ninternally release this constraint so that the heavy lifting done by GEOS can\nbe done in parallel, from a single Python process.\n\nUsage\nHere is the canonical example of building an approximately circular patch by\nbuffering a point, using the scalar Geometry interface:\n>>> from shapely import Point\n>>> patch = Point(0.0, 0.0).buffer(10.0)\n>>> patch\n<POLYGON ((10 0, 9.952 -0.98, 9.808 -1.951, 9.569 -2.903, 9.239 -3.827, 8.81...>\n>>> patch.area\n313.6548490545941\nUsing the vectorized ufunc interface (instead of using a manual for loop),\ncompare an array of points with a polygon:\n>>> import shapely\n>>> import numpy as np\n>>> geoms = np.array([Point(0, 0), Point(1, 1), Point(2, 2)])\n>>> polygon = shapely.box(0, 0, 2, 2)\n\n>>> shapely.contains(polygon, geoms)\narray([False,  True, False])\nSee the documentation for more examples and guidance: https://shapely.readthedocs.io\n\nRequirements\nShapely 2.1 requires\n\nPython >=3.8\nGEOS >=3.7\nNumPy >=1.16\n\n\nInstalling Shapely\nWe recommend installing Shapely using one of the available built\ndistributions, for example using pip or conda:\n$ pip install shapely\n# or using conda\n$ conda install shapely --channel conda-forge\nSee the installation documentation\nfor more details and advanced installation instructions.\n\nIntegration\nShapely does not read or write data files, but it can serialize and deserialize\nusing several well known formats and protocols. 
The shapely.wkb and shapely.wkt\nmodules provide dumpers and loaders inspired by Python's pickle module.\n>>> from shapely.wkt import dumps, loads\n>>> dumps(loads('POINT (0 0)'))\n'POINT (0.0000000000000000 0.0000000000000000)'\nShapely can also integrate with other Python GIS packages using GeoJSON-like\ndicts.\n>>> import json\n>>> from shapely.geometry import mapping, shape\n>>> s = shape(json.loads('{\"type\": \"Point\", \"coordinates\": [0.0, 0.0]}'))\n>>> s\n<POINT (0 0)>\n>>> print(json.dumps(mapping(s)))\n{\"type\": \"Point\", \"coordinates\": [0.0, 0.0]}\n\nSupport\nQuestions about using Shapely may be asked on the GIS StackExchange using the \"shapely\"\ntag.\nBugs may be reported at https://github.com/shapely/shapely/issues.\n\nCopyright & License\nShapely is licensed under BSD 3-Clause license.\nGEOS is available under the terms of GNU Lesser General Public License (LGPL) 2.1 at https://libgeos.org.\n\n\n", "description": "Manipulates and analyzes geometric objects in the Cartesian plane"}, {"name": "shap", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nInstall\nTree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models)\nNatural language example (transformers)\nDeep learning example with DeepExplainer (TensorFlow/Keras models)\nDeep learning example with GradientExplainer (TensorFlow/Keras/PyTorch models)\nModel agnostic example with KernelExplainer (explains any function)\nSHAP Interaction Values\nSample notebooks\nTreeExplainer\nDeepExplainer\nGradientExplainer\nLinearExplainer\nKernelExplainer\nDocumentation notebooks\nMethods Unified by SHAP\nCitations\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).\nInstall\nSHAP can be installed from either PyPI or conda-forge:\npip install shap\nor\nconda install -c conda-forge shap\n\nTree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models)\nWhile SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods (see our Nature MI paper). Fast C++ implementations are supported for XGBoost, LightGBM, CatBoost, scikit-learn and pyspark tree models:\nimport xgboost\nimport shap\n\n# train an XGBoost model\nX, y = shap.datasets.boston()\nmodel = xgboost.XGBRegressor().fit(X, y)\n\n# explain the model's predictions using SHAP\n# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)\nexplainer = shap.Explainer(model)\nshap_values = explainer(X)\n\n# visualize the first prediction's explanation\nshap.plots.waterfall(shap_values[0])\n\n\n\nThe above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue. 
Another way to visualize the same explanation is to use a force plot (these are introduced in our Nature BME paper):\n# visualize the first prediction's explanation with a force plot\nshap.plots.force(shap_values[0])\n\n\n\nIf we take many force plot explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset (in the notebook this plot is interactive):\n# visualize all the training set predictions\nshap.plots.force(shap_values)\n\n\n\nTo understand how a single feature effects the output of the model we can plot the SHAP value of that feature vs. the value of the feature for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, the plot below represents the change in predicted house price as RM (the average number of rooms per house in an area) changes. Vertical dispersion at a single value of RM represents interaction effects with other features. To help reveal these interactions we can color by another feature. If we pass the whole explanation tensor to the color argument the scatter plot will pick the best feature to color by. In this case it picks RAD (index of accessibility to radial highways) since that highlights that the average number of rooms per house has less impact on home price for areas with a high RAD value.\n# create a dependence scatter plot to show the effect of a single feature across the whole dataset\nshap.plots.scatter(shap_values[:,\"RM\"], color=shap_values)\n\n\n\nTo get an overview of which features are most important for a model we can plot the SHAP values of every feature for every sample. The plot below sorts features by the sum of SHAP value magnitudes over all samples, and uses SHAP values to show the distribution of the impacts each feature has on the model output. The color represents the feature value (red high, blue low). This reveals for example that a high LSTAT (% lower status of the population) lowers the predicted home price.\n# summarize the effects of all the features\nshap.plots.beeswarm(shap_values)\n\n\n\nWe can also just take the mean absolute value of the SHAP values for each feature to get a standard bar plot (produces stacked bars for multi-class outputs):\nshap.plots.bar(shap_values)\n\n\n\nNatural language example (transformers)\nSHAP has specific support for natural language models like those in the Hugging Face transformers library. By adding coalitional rules to traditional Shapley values we can form games that explain large modern NLP model using very few function evaluations. Using this functionality is as simple as passing a supported transformers pipeline to SHAP:\nimport transformers\nimport shap\n\n# load a transformers pipeline model\nmodel = transformers.pipeline('sentiment-analysis', return_all_scores=True)\n\n# explain the model on two sample inputs\nexplainer = shap.Explainer(model)\nshap_values = explainer([\"What a great movie! ...if you have no taste.\"])\n\n# visualize the first prediction's explanation for the POSITIVE output class\nshap.plots.text(shap_values[0, :, \"POSITIVE\"])\n\n\n\nDeep learning example with DeepExplainer (TensorFlow/Keras models)\nDeep SHAP is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the SHAP NIPS paper. 
The implementation here differs from the original DeepLIFT by using a distribution of background samples instead of a single reference value, and using Shapley equations to linearize components such as max, softmax, products, divisions, etc. Note that some of these enhancements have also been since integrated into DeepLIFT. TensorFlow models and Keras models using the TensorFlow backend are supported (there is also preliminary support for PyTorch):\n# ...include code from https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py\n\nimport shap\nimport numpy as np\n\n# select a set of background examples to take an expectation over\nbackground = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]\n\n# explain predictions of the model on four images\ne = shap.DeepExplainer(model, background)\n# ...or pass tensors directly\n# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)\nshap_values = e.shap_values(x_test[1:5])\n\n# plot the feature attributions\nshap.image_plot(shap_values, -x_test[1:5])\n\n\n\nThe plot above explains ten outputs (digits 0-9) for four different images. Red pixels increase the model's output while blue pixels decrease the output. The input images are shown on the left, and as nearly transparent grayscale backings behind each of the explanations. The sum of the SHAP values equals the difference between the expected model output (averaged over the background dataset) and the current model output. Note that for the 'zero' image the blank middle is important, while for the 'four' image the lack of a connection on top makes it a four instead of a nine.\nDeep learning example with GradientExplainer (TensorFlow/Keras/PyTorch models)\nExpected gradients combines ideas from Integrated Gradients, SHAP, and SmoothGrad into a single expected value equation. This allows an entire dataset to be used as the background distribution (as opposed to a single reference value) and allows local smoothing. If we approximate the model with a linear function between each background data sample and the current input to be explained, and we assume the input features are independent then expected gradients will compute approximate SHAP values. 
In the example below we have explained how the 7th intermediate layer of the VGG16 ImageNet model impacts the output probabilities.\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg16 import preprocess_input\nimport keras.backend as K\nimport numpy as np\nimport json\nimport shap\n\n# load pre-trained model and choose two images to explain\nmodel = VGG16(weights='imagenet', include_top=True)\nX,y = shap.datasets.imagenet50()\nto_explain = X[[39,41]]\n\n# load the ImageNet class names\nurl = \"https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json\"\nfname = shap.datasets.cache(url)\nwith open(fname) as f:\n    class_names = json.load(f)\n\n# explain how the input to the 7th layer of the model explains the top two classes\ndef map2layer(x, layer):\n    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))\n    return K.get_session().run(model.layers[layer].input, feed_dict)\ne = shap.GradientExplainer(\n    (model.layers[7].input, model.layers[-1].output),\n    map2layer(X, 7),\n    local_smoothing=0 # std dev of smoothing noise\n)\nshap_values,indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)\n\n# get the names for the classes\nindex_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)\n\n# plot the explanations\nshap.image_plot(shap_values, to_explain, index_names)\n\n\n\nPredictions for two input images are explained in the plot above. Red pixels represent positive SHAP values that increase the probability of the class, while blue pixels represent negative SHAP values the reduce the probability of the class. By using ranked_outputs=2 we explain only the two most likely classes for each input (this spares us from explaining all 1,000 classes).\nModel agnostic example with KernelExplainer (explains any function)\nKernel SHAP uses a specially-weighted local linear regression to estimate SHAP values for any model. Below is a simple example for explaining a multi-class SVM on the classic iris dataset.\nimport sklearn\nimport shap\nfrom sklearn.model_selection import train_test_split\n\n# print the JS visualization code to the notebook\nshap.initjs()\n\n# train a SVM classifier\nX_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)\nsvm = sklearn.svm.SVC(kernel='rbf', probability=True)\nsvm.fit(X_train, Y_train)\n\n# use Kernel SHAP to explain test set predictions\nexplainer = shap.KernelExplainer(svm.predict_proba, X_train, link=\"logit\")\nshap_values = explainer.shap_values(X_test, nsamples=100)\n\n# plot the SHAP values for the Setosa output of the first instance\nshap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link=\"logit\")\n\n\n\nThe above explanation shows four features each contributing to push the model output from the base value (the average model output over the training dataset we passed) towards zero. If there were any features pushing the class label higher they would be shown in red.\nIf we take many explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset. This is exactly what we do below for all the examples in the iris test set:\n# plot the SHAP values for the Setosa output of all instances\nshap.force_plot(explainer.expected_value[0], shap_values[0], X_test, link=\"logit\")\n\n\n\nSHAP Interaction Values\nSHAP interaction values are a generalization of SHAP values to higher order interactions. 
Fast exact computation of pairwise interactions is implemented for tree models with shap.TreeExplainer(model).shap_interaction_values(X). This returns a matrix for every prediction, where the main effects are on the diagonal and the interaction effects are off-diagonal. These values often reveal interesting hidden relationships, such as how the increased risk of death peaks for men at age 60 (see the NHANES notebook for details):\n\n\n\nSample notebooks\nThe notebooks below demonstrate different use cases for SHAP. Look inside the notebooks directory of the repository if you want to try playing with the original notebooks yourself.\nTreeExplainer\nAn implementation of Tree SHAP, a fast and exact algorithm to compute SHAP values for trees and ensembles of trees.\n\n\nNHANES survival model with XGBoost and SHAP interaction values - Using mortality data from 20 years of follow-up, this notebook demonstrates how to use XGBoost and shap to uncover complex risk factor relationships.\n\n\nCensus income classification with LightGBM - Using the standard adult census income dataset, this notebook trains a gradient boosting tree model with LightGBM and then explains predictions using shap.\n\n\nLeague of Legends Win Prediction with XGBoost - Using a Kaggle dataset of 180,000 ranked matches from League of Legends we train and explain a gradient boosting tree model with XGBoost to predict if a player will win their match.\n\n\nDeepExplainer\nAn implementation of Deep SHAP, a faster (but only approximate) algorithm to compute SHAP values for deep learning models that is based on connections between SHAP and the DeepLIFT algorithm.\n\n\nMNIST Digit classification with Keras - Using the MNIST handwriting recognition dataset, this notebook trains a neural network with Keras and then explains predictions using shap.\n\n\nKeras LSTM for IMDB Sentiment Classification - This notebook trains an LSTM with Keras on the IMDB text sentiment analysis dataset and then explains predictions using shap.\n\n\nGradientExplainer\nAn implementation of expected gradients to approximate SHAP values for deep learning models. It is based on connections between SHAP and the Integrated Gradients algorithm. GradientExplainer is slower than DeepExplainer and makes different approximation assumptions.\n\nExplain an Intermediate Layer of VGG16 on ImageNet - This notebook demonstrates how to explain the output of a pre-trained VGG16 ImageNet model using an internal convolutional layer.\n\nLinearExplainer\nFor a linear model with independent features we can analytically compute the exact SHAP values. We can also account for feature correlation if we are willing to estimate the feature covariance matrix. LinearExplainer supports both of these options.\n\nSentiment Analysis with Logistic Regression - This notebook demonstrates how to explain a linear logistic regression sentiment analysis model.\n\nKernelExplainer\nAn implementation of Kernel SHAP, a model agnostic method to estimate SHAP values for any model. Because it makes no assumptions about the model type, KernelExplainer is slower than the other model-type-specific algorithms.\n\n\nCensus income classification with scikit-learn - Using the standard adult census income dataset, this notebook trains a k-nearest neighbors classifier using scikit-learn and then explains predictions using shap.\n\n\nImageNet VGG16 Model with Keras - Explain the classic VGG16 convolutional neural network's predictions for an image. 
This works by applying the model agnostic Kernel SHAP method to a super-pixel segmented image.\n\n\nIris classification - A basic demonstration using the popular iris species dataset. It explains predictions from six different models in scikit-learn using shap.\n\n\nDocumentation notebooks\nThese notebooks comprehensively demonstrate how to use specific functions and objects.\n\n\nshap.decision_plot and shap.multioutput_decision_plot\n\n\nshap.dependence_plot\n\n\nMethods Unified by SHAP\n\n\nLIME: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \"Why should i trust you?: Explaining the predictions of any classifier.\" Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.\n\n\nShapley sampling values: Strumbelj, Erik, and Igor Kononenko. \"Explaining prediction models and individual predictions with feature contributions.\" Knowledge and information systems 41.3 (2014): 647-665.\n\n\nDeepLIFT: Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. \"Learning important features through propagating activation differences.\" arXiv preprint arXiv:1704.02685 (2017).\n\n\nQII: Datta, Anupam, Shayak Sen, and Yair Zick. \"Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems.\" Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.\n\n\nLayer-wise relevance propagation: Bach, Sebastian, et al. \"On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.\" PloS one 10.7 (2015): e0130140.\n\n\nShapley regression values: Lipovetsky, Stan, and Michael Conklin. \"Analysis of regression in game theory approach.\" Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.\n\n\nTree interpreter: Saabas, Ando. Interpreting random forests. http://blog.datadive.net/interpreting-random-forests/\n\n\nCitations\nThe algorithms and visualizations used in this package came primarily out of research in Su-In Lee's lab at the University of Washington, and Microsoft Research. 
If you use SHAP in your research we would appreciate a citation to the appropriate paper(s):\n\nFor general use of SHAP you can read/cite our NeurIPS paper (bibtex).\nFor TreeExplainer you can read/cite our Nature Machine Intelligence paper (bibtex; free access).\nFor GPUTreeExplainer you can read/cite this article.\nFor force_plot visualizations and medical applications you can read/cite our Nature Biomedical Engineering paper (bibtex; free access).\n\n\n\n\n", "description": "A unified approach to explain the output of any machine learning model."}, {"name": "sentencepiece", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSentencePiece\nTechnical highlights\nComparisons with other implementations\nOverview\nWhat is SentencePiece?\nThe number of unique tokens is predetermined\nTrains from raw sentences\nWhitespace is treated as a basic symbol\nSubword regularization and BPE-dropout\nInstallation\nPython module\nBuild and install SentencePiece command line tools from C++ source\nBuild and install using vcpkg\nDownload and install SentencePiece from signed released wheels\nUsage instructions\nTrain SentencePiece Model\nEncode raw text into sentence pieces/ids\nDecode sentence pieces/ids into raw text\nEnd-to-End Example\nExport vocabulary list\nRedefine special meta tokens\nVocabulary restriction\nAdvanced topics\n\n\n\n\n\nREADME.md\n\n\n\n\nSentencePiece\n\n\n\n\n\n\n\n\nSentencePiece is an unsupervised text tokenizer and detokenizer mainly for\nNeural Network-based text generation systems where the vocabulary size\nis predetermined prior to the neural model training. SentencePiece implements\nsubword units (e.g., byte-pair-encoding (BPE) [Sennrich et al.]) and\nunigram language model [Kudo.])\nwith the extension of direct training from raw sentences. SentencePiece allows us to make a purely end-to-end system that does not depend on language-specific pre/postprocessing.\nThis is not an official Google product.\nTechnical highlights\n\nPurely data driven: SentencePiece trains tokenization and detokenization\nmodels from sentences. Pre-tokenization (Moses tokenizer/MeCab/KyTea) is not always required.\nLanguage independent: SentencePiece treats the sentences just as sequences of Unicode characters. There is no language-dependent logic.\nMultiple subword algorithms: BPE  [Sennrich et al.] and unigram language model [Kudo.] 
are supported.\nSubword regularization: SentencePiece implements subword sampling for subword regularization and BPE-dropout which help to improve the robustness and accuracy of NMT models.\nFast and lightweight: Segmentation speed is around 50k sentences/sec, and memory footprint is around 6MB.\nSelf-contained: The same tokenization/detokenization is obtained as long as the same model file is used.\nDirect vocabulary id generation: SentencePiece manages vocabulary to id mapping and can directly generate vocabulary id sequences from raw sentences.\nNFKC-based normalization: SentencePiece performs NFKC-based text normalization.\n\nFor those unfamiliar with SentencePiece as a software/algorithm, one can read a gentle introduction here.\nComparisons with other implementations\n\n\n\nFeature\nSentencePiece\nsubword-nmt\nWordPiece\n\n\n\n\nSupported algorithm\nBPE, unigram, char, word\nBPE\nBPE*\n\n\nOSS?\nYes\nYes\nGoogle internal\n\n\nSubword regularization\nYes\nNo\nNo\n\n\nPython Library (pip)\nYes\nNo\nN/A\n\n\nC++ Library\nYes\nNo\nN/A\n\n\nPre-segmentation required?\nNo\nYes\nYes\n\n\nCustomizable normalization (e.g., NFKC)\nYes\nNo\nN/A\n\n\nDirect id generation\nYes\nNo\nN/A\n\n\n\nNote that BPE algorithm used in WordPiece is slightly different from the original BPE.\nOverview\nWhat is SentencePiece?\nSentencePiece is a re-implementation of sub-word units, an effective way to alleviate the open vocabulary\nproblems in neural machine translation. SentencePiece supports two segmentation algorithms, byte-pair-encoding (BPE) [Sennrich et al.] and unigram language model [Kudo.]. Here are the high level differences from other implementations.\nThe number of unique tokens is predetermined\nNeural Machine Translation models typically operate with a fixed\nvocabulary. Unlike most unsupervised word segmentation algorithms, which\nassume an infinite vocabulary, SentencePiece trains the segmentation model such\nthat the final vocabulary size is fixed, e.g., 8k, 16k, or 32k.\nNote that SentencePiece specifies the final vocabulary size for training, which is different from\nsubword-nmt that uses the number of merge operations.\nThe number of merge operations is a BPE-specific parameter and not applicable to other segmentation algorithms, including unigram, word and character.\nTrains from raw sentences\nPrevious sub-word implementations assume that the input sentences are pre-tokenized. This constraint was required for efficient training, but makes the preprocessing complicated as we have to run language dependent tokenizers in advance.\nThe implementation of SentencePiece is fast enough to train the model from raw sentences. This is useful for training the tokenizer and detokenizer for Chinese and Japanese where no explicit spaces exist between words.\nWhitespace is treated as a basic symbol\nThe first step of Natural Language processing is text tokenization. For\nexample, a standard English tokenizer would segment the text \"Hello world.\" into the\nfollowing three tokens.\n\n[Hello] [World] [.]\n\nOne observation is that the original input and tokenized sequence are NOT\nreversibly convertible. For instance, the information that is no space between\n\u201cWorld\u201d and \u201c.\u201d is dropped from the tokenized sequence, since e.g., Tokenize(\u201cWorld.\u201d) == Tokenize(\u201cWorld .\u201d)\nSentencePiece treats the input text just as a sequence of Unicode characters. Whitespace is also handled as a normal symbol. 
To handle the whitespace as a basic token explicitly, SentencePiece first escapes the whitespace with a meta symbol \"\u2581\" (U+2581) as follows.\n\nHello\u2581World.\n\nThen, this text is segmented into small pieces, for example:\n\n[Hello] [\u2581Wor] [ld] [.]\n\nSince the whitespace is preserved in the segmented text, we can detokenize the text without any ambiguities.\n  detokenized = ''.join(pieces).replace('\u2581', ' ')\n\nThis feature makes it possible to perform detokenization without relying on language-specific resources.\nNote that we cannot apply the same lossless conversions when splitting the\nsentence with standard word segmenters, since they treat the whitespace as a\nspecial symbol. Tokenized sequences do not preserve the necessary information to restore the original sentence.\n\n(en) Hello world.   \u2192 [Hello] [World] [.]   (A space between Hello and World)\n(ja) \u3053\u3093\u306b\u3061\u306f\u4e16\u754c\u3002  \u2192 [\u3053\u3093\u306b\u3061\u306f] [\u4e16\u754c] [\u3002] (No space between \u3053\u3093\u306b\u3061\u306f and \u4e16\u754c)\n\nSubword regularization and BPE-dropout\nSubword regularization [Kudo.] and BPE-dropout Provilkov et al are simple regularization methods\nthat virtually augment training data with on-the-fly subword sampling, which helps to improve the accuracy as well as robustness of NMT models.\nTo enable subword regularization, you would like to integrate SentencePiece library\n(C++/Python) into the NMT system to sample one segmentation for each parameter update, which is different from the standard off-line data preparations. Here's the example of Python library. You can find that 'New York' is segmented differently on each SampleEncode (C++) or encode with enable_sampling=True (Python) calls. The details of sampling parameters are found in sentencepiece_processor.h.\n>>> import sentencepiece as spm\n>>> s = spm.SentencePieceProcessor(model_file='spm.model')\n>>> for n in range(5):\n...     
s.encode('New York', out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1)\n...\n['\u2581', 'N', 'e', 'w', '\u2581York']\n['\u2581', 'New', '\u2581York']\n['\u2581', 'New', '\u2581Y', 'o', 'r', 'k']\n['\u2581', 'New', '\u2581York']\n['\u2581', 'New', '\u2581York']\n\nInstallation\nPython module\nSentencePiece provides Python wrapper that supports both SentencePiece training and segmentation.\nYou can install Python binary package of SentencePiece with.\npip install sentencepiece\n\nFor more detail, see Python module\nBuild and install SentencePiece command line tools from C++ source\nThe following tools and libraries are required to build SentencePiece:\n\ncmake\nC++11 compiler\ngperftools library (optional, 10-40% performance improvement can be obtained.)\n\nOn Ubuntu, the build tools can be installed with apt-get:\n% sudo apt-get install cmake build-essential pkg-config libgoogle-perftools-dev\n\nThen, you can build and install command line tools as follows.\n% git clone https://github.com/google/sentencepiece.git \n% cd sentencepiece\n% mkdir build\n% cd build\n% cmake ..\n% make -j $(nproc)\n% sudo make install\n% sudo ldconfig -v\n\nOn OSX/macOS, replace the last command with sudo update_dyld_shared_cache\nBuild and install using vcpkg\nYou can download and install sentencepiece using the vcpkg dependency manager:\ngit clone https://github.com/Microsoft/vcpkg.git\ncd vcpkg\n./bootstrap-vcpkg.sh\n./vcpkg integrate install\n./vcpkg install sentencepiece\n\nThe sentencepiece port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.\nDownload and install SentencePiece from signed released wheels\nYou can download the wheel from the GitHub releases page.\nWe generate SLSA3 signatures using the OpenSSF's slsa-framework/slsa-github-generator during the release process. To verify a release binary:\n\nInstall the verification tool from slsa-framework/slsa-verifier#installation.\nDownload the provenance file attestation.intoto.jsonl from the GitHub releases page.\nRun the verifier:\n\nslsa-verifier -artifact-path <the-wheel> -provenance attestation.intoto.jsonl -source github.com/google/sentencepiece -tag <the-tag>\npip install wheel_file.whl\nUsage instructions\nTrain SentencePiece Model\n% spm_train --input=<input> --model_prefix=<model_name> --vocab_size=8000 --character_coverage=1.0 --model_type=<type>\n\n\n--input: one-sentence-per-line raw corpus file. No need to run\ntokenizer, normalizer or preprocessor. By default, SentencePiece normalizes\nthe input with Unicode NFKC. You can pass a comma-separated list of files.\n--model_prefix: output model name prefix. <model_name>.model and <model_name>.vocab are generated.\n--vocab_size: vocabulary size, e.g., 8000, 16000, or 32000\n--character_coverage: amount of characters covered by the model, good defaults are: 0.9995 for languages with rich character set like Japanese or Chinese and 1.0 for other languages with small character set.\n--model_type: model type. Choose from unigram (default), bpe, char, or word. 
The input sentence must be pretokenized when using word type.\n\nUse --help flag to display all parameters for training, or see here for an overview.\nEncode raw text into sentence pieces/ids\n% spm_encode --model=<model_file> --output_format=piece < input > output\n% spm_encode --model=<model_file> --output_format=id < input > output\n\nUse --extra_options flag to insert the BOS/EOS markers or reverse the input sequence.\n% spm_encode --extra_options=eos (add </s> only)\n% spm_encode --extra_options=bos:eos (add <s> and </s>)\n% spm_encode --extra_options=reverse:bos:eos (reverse input and add <s> and </s>)\n\nSentencePiece supports nbest segmentation and segmentation sampling with --output_format=(nbest|sample)_(piece|id) flags.\n% spm_encode --model=<model_file> --output_format=sample_piece --nbest_size=-1 --alpha=0.5 < input > output\n% spm_encode --model=<model_file> --output_format=nbest_id --nbest_size=10 < input > output\n\nDecode sentence pieces/ids into raw text\n% spm_decode --model=<model_file> --input_format=piece < input > output\n% spm_decode --model=<model_file> --input_format=id < input > output\n\nUse --extra_options flag to decode the text in reverse order.\n% spm_decode --extra_options=reverse < input > output\n\nEnd-to-End Example\n% spm_train --input=data/botchan.txt --model_prefix=m --vocab_size=1000\nunigram_model_trainer.cc(494) LOG(INFO) Starts training with :\ninput: \"../data/botchan.txt\"\n... <snip>\nunigram_model_trainer.cc(529) LOG(INFO) EM sub_iter=1 size=1100 obj=10.4973 num_tokens=37630 num_tokens/piece=34.2091\ntrainer_interface.cc(272) LOG(INFO) Saving model: m.model\ntrainer_interface.cc(281) LOG(INFO) Saving vocabs: m.vocab\n\n% echo \"I saw a girl with a telescope.\" | spm_encode --model=m.model\n\u2581I \u2581saw \u2581a \u2581girl \u2581with \u2581a \u2581 te le s c o pe .\n\n% echo \"I saw a girl with a telescope.\" | spm_encode --model=m.model --output_format=id\n9 459 11 939 44 11 4 142 82 8 28 21 132 6\n\n% echo \"9 459 11 939 44 11 4 142 82 8 28 21 132 6\" | spm_decode --model=m.model --input_format=id\nI saw a girl with a telescope.\n\nYou can find that the original input sentence is restored from the vocabulary id sequence.\nExport vocabulary list\n% spm_export_vocab --model=<model_file> --output=<output file>\n\n<output file> stores a list of vocabulary and emission log probabilities. The vocabulary id corresponds to the line number in this file.\nRedefine special meta tokens\nBy default, SentencePiece uses Unknown (<unk>), BOS (<s>) and EOS (</s>) tokens which have the ids of 0, 1, and 2 respectively. We can redefine this mapping in the training phase as follows.\n% spm_train --bos_id=0 --eos_id=1 --unk_id=5 --input=... --model_prefix=... --character_coverage=...\n\nWhen setting -1 id e.g., bos_id=-1, this special token is disabled. Note that the unknown id cannot be disabled.  We can define an id for padding (<pad>) as --pad_id=3. \u00a0\nIf you want to assign another special tokens, please see Use custom symbols.\nVocabulary restriction\nspm_encode accepts a --vocabulary and a --vocabulary_threshold option so that spm_encode will only produce symbols which also appear in the vocabulary (with at least some frequency). The background of this feature is described in subword-nmt page.\nThe usage is basically the same as that of subword-nmt. 
Assuming that L1 and L2 are the two languages (source/target languages), train the shared spm model, and get resulting vocabulary for each:\n% cat {train_file}.L1 {train_file}.L2 | shuffle > train\n% spm_train --input=train --model_prefix=spm --vocab_size=8000 --character_coverage=0.9995\n% spm_encode --model=spm.model --generate_vocabulary < {train_file}.L1 > {vocab_file}.L1\n% spm_encode --model=spm.model --generate_vocabulary < {train_file}.L2 > {vocab_file}.L2\n\nshuffle command is used just in case because spm_train loads the first 10M lines of corpus by default.\nThen segment train/test corpus with --vocabulary option\n% spm_encode --model=spm.model --vocabulary={vocab_file}.L1 --vocabulary_threshold=50 < {test_file}.L1 > {test_file}.seg.L1\n% spm_encode --model=spm.model --vocabulary={vocab_file}.L2 --vocabulary_threshold=50 < {test_file}.L2 > {test_file}.seg.L2\n\nAdvanced topics\n\nSentencePiece Experiments\nSentencePieceProcessor C++ API\nUse custom text normalization rules\nUse custom symbols\nPython Module\n[Segmentation and training algorithms in detail]\n\n\n\n", "description": "Unsupervised text tokenizer/detokenizer for neural network text generation"}, {"name": "Send2Trash", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nSend2Trash -- Send files to trash on all platforms\nStatus: Additional Help Welcome\nInstallation\nUsage\n\n\n\n\n\nREADME.rst\n\n\n\n\nSend2Trash -- Send files to trash on all platforms\nSend2Trash is a small package that sends files to the Trash (or Recycle Bin) natively and on\nall platforms. On OS X, it uses native FSMoveObjectToTrashSync Cocoa calls or can use pyobjc\nwith NSFileManager. On Windows, it uses native IFileOperation call if on Vista or newer and\npywin32 is installed or falls back to SHFileOperation calls. On other platforms, if PyGObject\nand GIO are available, it will use this.  Otherwise, it will fallback to its own implementation of\nthe trash specifications from freedesktop.org.\nctypes is used to access native libraries, so no compilation is necessary.\nSend2Trash supports Python 2.7 and up (Python 3 is supported).\n\nStatus: Additional Help Welcome\nAdditional help is welcome for supporting this package.  Specifically help with the OSX and Linux\nissues and fixes would be most appreciated.\n\nInstallation\nYou can download it with pip:\n\npython -m pip install -U send2trash\nTo install with pywin32 or pyobjc required specify the extra nativeLib:\n\npython -m pip install -U send2trash[nativeLib]\nor you can download the source from http://github.com/arsenetar/send2trash and install it with:\n>>> python setup.py install\n\n\nUsage\n>>> from send2trash import send2trash\n>>> send2trash('some_file')\n>>> send2trash(['some_file1', 'some_file2'])\nOn Freedesktop platforms (Linux, BSD, etc.), you may not be able to efficiently\ntrash some files. In these cases, an exception send2trash.TrashPermissionError\nis raised, so that the application can handle this case. This inherits from\nPermissionError (OSError on Python 2). Specifically, this affects\nfiles on a different device to the user's home directory, where the root of the\ndevice does not have a .Trash directory, and we don't have permission to\ncreate a .Trash-$UID directory.\nFor any other problem, OSError is raised.\n\n\n", "description": "Safer file deletion by sending to the OS trash/recycle bin."}, {"name": "semver", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nQuickstart\nA Python module for semantic versioning. 
Simplifies comparing versions.\n\n\n \n \n  \n\n\n \n\nNote\nThis project works for Python 3.7 and greater only. If you are\nlooking for a compatible version for Python 2, use the\nmaintenance branch maint/v2.\nThe last version of semver which supports Python 2.7 to 3.5 will be\n2.x.y However, keep in mind, the major 2 release is frozen: no new\nfeatures nor backports will be integrated.\nWe recommend to upgrade your workflow to Python 3 to gain support,\nbugfixes, and new features.\n\nThe module follows the MAJOR.MINOR.PATCH style:\n\nMAJOR version when you make incompatible API changes,\nMINOR version when you add functionality in a backwards compatible manner, and\nPATCH version when you make backwards compatible bug fixes.\n\nAdditional labels for pre-release and build metadata are supported.\nTo import this library, use:\n>>> import semver\nWorking with the library is quite straightforward. To turn a version string into the\ndifferent parts, use the semver.Version.parse function:\n>>> ver = semver.Version.parse('1.2.3-pre.2+build.4')\n>>> ver.major\n1\n>>> ver.minor\n2\n>>> ver.patch\n3\n>>> ver.prerelease\n'pre.2'\n>>> ver.build\n'build.4'\nTo raise parts of a version, there are a couple of functions available for\nyou. The function semver.Version.bump_major leaves the original object untouched, but\nreturns a new semver.Version instance with the raised major part:\n>>> ver = semver.Version.parse(\"3.4.5\")\n>>> ver.bump_major()\nVersion(major=4, minor=0, patch=0, prerelease=None, build=None)\nIt is allowed to concatenate different \"bump functions\":\n>>> ver.bump_major().bump_minor()\nVersion(major=4, minor=1, patch=0, prerelease=None, build=None)\nTo compare two versions, semver provides the semver.compare function.\nThe return value indicates the relationship between the first and second\nversion:\n>>> semver.compare(\"1.0.0\", \"2.0.0\")\n-1\n>>> semver.compare(\"2.0.0\", \"1.0.0\")\n1\n>>> semver.compare(\"2.0.0\", \"2.0.0\")\n0\nThere are other functions to discover. Read on!\n\n\n", "description": "Simplifies comparing versions using semantic versioning"}, {"name": "seaborn", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nseaborn: statistical data visualization\nDocumentation\nDependencies\nInstallation\nCiting\nTesting\nDevelopment\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\nseaborn: statistical data visualization\n\n\n\n\n\nSeaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.\nDocumentation\nOnline documentation is available at seaborn.pydata.org.\nThe docs include a tutorial, example gallery, API reference, FAQ, and other useful information.\nTo build the documentation locally, please refer to doc/README.md.\nDependencies\nSeaborn supports Python 3.8+.\nInstallation requires numpy, pandas, and matplotlib. Some advanced statistical functionality requires scipy and/or statsmodels.\nInstallation\nThe latest stable release (and required dependencies) can be installed from PyPI:\npip install seaborn\n\nIt is also possible to include optional statistical dependencies:\npip install seaborn[stats]\n\nSeaborn can also be installed with conda:\nconda install seaborn\n\nNote that the main anaconda repository lags PyPI in adding new releases, but conda-forge (-c conda-forge) typically updates quickly.\nCiting\nA paper describing seaborn has been published in the Journal of Open Source Software. 
The paper provides an introduction to the key features of the library, and it can be used as a citation if seaborn proves integral to a scientific publication.\nTesting\nTesting seaborn requires installing additional dependencies; they can be installed with the dev extra (e.g., pip install .[dev]).\nTo test the code, run make test in the source directory. This will exercise the unit tests (using pytest) and generate a coverage report.\nCode style is enforced with flake8 using the settings in the setup.cfg file. Run make lint to check. Alternately, you can use pre-commit to automatically run lint checks on any files you are committing: just run pre-commit install to set it up, and then commit as usual going forward.\nDevelopment\nSeaborn development takes place on Github: https://github.com/mwaskom/seaborn\nPlease submit bugs that you encounter to the issue tracker with a reproducible example demonstrating the problem. Questions about usage are more at home on StackOverflow, where there is a seaborn tag.\n\n\n", "description": "Statistical data visualization using Matplotlib."}, {"name": "scipy", "readme": "\n\n\n\n\n\n\nSciPy (pronounced \u201cSigh Pie\u201d) is an open-source software for mathematics,\nscience, and engineering. It includes modules for statistics, optimization,\nintegration, linear algebra, Fourier transforms, signal and image processing,\nODE solvers, and more.\n\nWebsite: https://scipy.org\nDocumentation: https://docs.scipy.org/doc/scipy/\nDevelopment version of the documentation: https://scipy.github.io/devdocs\nMailing list: https://mail.python.org/mailman3/lists/scipy-dev.python.org/\nSource code: https://github.com/scipy/scipy\nContributing: https://scipy.github.io/devdocs/dev/index.html\nBug reports: https://github.com/scipy/scipy/issues\nCode of Conduct: https://docs.scipy.org/doc/scipy/dev/conduct/code_of_conduct.html\nReport a security vulnerability: https://tidelift.com/docs/security\nCiting in your work: https://www.scipy.org/citing-scipy/\n\nSciPy is built to work with\nNumPy arrays, and provides many user-friendly and efficient numerical routines,\nsuch as routines for numerical integration and optimization. Together, they\nrun on all popular operating systems, are quick to install, and are free of\ncharge. NumPy and SciPy are easy to use, but powerful enough to be depended\nupon by some of the world\u2019s leading scientists and engineers. If you need to\nmanipulate numbers on a computer and display or publish the results, give\nSciPy a try!\nFor the installation instructions, see our install\nguide.\n\nCall for Contributions\nWe appreciate and welcome contributions. Small improvements or fixes are always appreciated; issues labeled as \u201cgood\nfirst issue\u201d may be a good starting point. Have a look at our contributing\nguide.\nWriting code isn\u2019t the only way to contribute to SciPy. You can also:\n\nreview pull requests\ntriage issues\ndevelop tutorials, presentations, and other educational materials\nmaintain and improve our website\ndevelop graphic design for our brand assets and promotional materials\nhelp with outreach and onboard new contributors\nwrite grant proposals and help with other fundraising efforts\n\nIf you\u2019re unsure where to start or how your skills fit in, reach out! 
You can\nask on the mailing list or here, on GitHub, by leaving a\ncomment on a relevant issue that is already open.\nIf you are new to contributing to open source, this\nguide helps explain why, what,\nand how to get involved.\n\n", "description": "Scientific computing library for Python.", "category": "Data analysis/science"}, {"name": "scikit-learn", "readme": "\n         \n\nscikit-learn is a Python module for machine learning built on top of\nSciPy and is distributed under the 3-Clause BSD license.\nThe project was started in 2007 by David Cournapeau as a Google Summer\nof Code project, and since then many volunteers have contributed. See\nthe About us page\nfor a list of core contributors.\nIt is currently maintained by a team of volunteers.\nWebsite: https://scikit-learn.org\n\nInstallation\n\nDependencies\nscikit-learn requires:\n\nPython (>= 3.8)\nNumPy (>= 1.17.3)\nSciPy (>= 1.5.0)\njoblib (>= 1.1.1)\nthreadpoolctl (>= 2.0.0)\n\n\nScikit-learn 0.20 was the last version to support Python 2.7 and Python 3.4.\nscikit-learn 1.0 and later require Python 3.7 or newer.\nscikit-learn 1.1 and later require Python 3.8 or newer.\nScikit-learn plotting capabilities (i.e., functions start with plot_ and\nclasses end with \u201cDisplay\u201d) require Matplotlib (>= 3.1.3).\nFor running the examples Matplotlib >= 3.1.3 is required.\nA few examples require scikit-image >= 0.16.2, a few examples\nrequire pandas >= 1.0.5, some examples require seaborn >=\n0.9.0 and plotly >= 5.14.0.\n\n\nUser installation\nIf you already have a working installation of numpy and scipy,\nthe easiest way to install scikit-learn is using pip:\npip install -U scikit-learn\nor conda:\nconda install -c conda-forge scikit-learn\nThe documentation includes more detailed installation instructions.\n\n\n\nChangelog\nSee the changelog\nfor a history of notable changes to scikit-learn.\n\n\nDevelopment\nWe welcome new contributors of all experience levels. The scikit-learn\ncommunity goals are to be helpful, welcoming, and effective. The\nDevelopment Guide\nhas detailed information about contributing code, documentation, tests, and\nmore. We\u2019ve included some basic information in this README.\n\nImportant links\n\nOfficial source code repo: https://github.com/scikit-learn/scikit-learn\nDownload releases: https://pypi.org/project/scikit-learn/\nIssue tracker: https://github.com/scikit-learn/scikit-learn/issues\n\n\n\nSource code\nYou can check the latest sources with the command:\ngit clone https://github.com/scikit-learn/scikit-learn.git\n\n\nContributing\nTo learn more about making a contribution to scikit-learn, please see our\nContributing guide.\n\n\nTesting\nAfter installation, you can launch the test suite from outside the source\ndirectory (you will need to have pytest >= 7.1.2 installed):\npytest sklearn\nSee the web page https://scikit-learn.org/dev/developers/contributing.html#testing-and-improving-test-coverage\nfor more information.\n\nRandom number generation can be controlled during testing by setting\nthe SKLEARN_SEED environment variable.\n\n\n\nSubmitting a Pull Request\nBefore opening a Pull Request, have a look at the\nfull Contributing page to make sure your code complies\nwith our guidelines: https://scikit-learn.org/stable/developers/index.html\n\n\n\nProject History\nThe project was started in 2007 by David Cournapeau as a Google Summer\nof Code project, and since then many volunteers have contributed. 
See\nthe About us page\nfor a list of core contributors.\nThe project is currently maintained by a team of volunteers.\nNote: scikit-learn was previously referred to as scikits.learn.\n\n\nHelp and Support\n\nDocumentation\n\nHTML documentation (stable release): https://scikit-learn.org\nHTML documentation (development version): https://scikit-learn.org/dev/\nFAQ: https://scikit-learn.org/stable/faq.html\n\n\n\nCommunication\n\nMailing list: https://mail.python.org/mailman/listinfo/scikit-learn\nGitter: https://gitter.im/scikit-learn/scikit-learn\nLogos & Branding: https://github.com/scikit-learn/scikit-learn/tree/main/doc/logos\nBlog: https://blog.scikit-learn.org\nCalendar: https://blog.scikit-learn.org/calendar/\nTwitter: https://twitter.com/scikit_learn\nStack Overflow: https://stackoverflow.com/questions/tagged/scikit-learn\nGithub Discussions: https://github.com/scikit-learn/scikit-learn/discussions\nWebsite: https://scikit-learn.org\nLinkedIn: https://www.linkedin.com/company/scikit-learn\nYouTube: https://www.youtube.com/channel/UCJosFjYm0ZYVUARxuOZqnnw/playlists\nFacebook: https://www.facebook.com/scikitlearnofficial/\nInstagram: https://www.instagram.com/scikitlearnofficial/\nTikTok: https://www.tiktok.com/@scikit.learn\n\n\n\nCitation\nIf you use scikit-learn in a scientific publication, we would appreciate citations: https://scikit-learn.org/stable/about.html#citing-scikit-learn\n\n\n", "category": "Machine learning"}, {"name": "scikit-image", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nscikit-image: Image processing in Python\nInstallation\nLicense\nCitation\n\n\n\n\n\nREADME.md\n\n\n\n\nscikit-image: Image processing in Python\n\n\n\n\nWebsite (including documentation): https://scikit-image.org/\nDocumentation: https://scikit-image.org/docs/stable/\nUser forum: https://forum.image.sc/tag/scikit-image\nDeveloper forum: https://discuss.scientific-python.org/c/contributor/skimage\nSource: https://github.com/scikit-image/scikit-image\n\nInstallation\n\npip: pip install scikit-image\nconda: conda install -c conda-forge scikit-image\n\nAlso see installing scikit-image.\nLicense\nSee LICENSE.txt.\nCitation\nIf you find this project useful, please cite:\n\nSt\u00e9fan van der Walt, Johannes L. Sch\u00f6nberger, Juan Nunez-Iglesias,\nFran\u00e7ois Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle\nGouillart, Tony Yu, and the scikit-image contributors.\nscikit-image: Image processing in Python. PeerJ 2:e453 (2014)\nhttps://doi.org/10.7717/peerj.453\n\n\n\n", "category": "Image processing"}, {"name": "rpds-py", "readme": "\n  \nPython bindings to the Rust rpds crate for persistent data structures.\nWhat\u2019s here is quite minimal (in transparency, it was written initially to support replacing pyrsistent in the referencing library).\nIf you see something missing (which is very likely), a PR is definitely welcome to add it.\n\nInstallation\nThe distribution on PyPI is named rpds.py (equivalently rpds-py), and thus can be installed via e.g.:\n$ pip install rpds-py\nNote that if you install rpds-py from source, you will need a Rust toolchain installed, as it is a build-time dependency.\nAn example of how to do so in a Dockerfile can be found here.\nIf you believe you are on a common platform which should have wheels built (i.e. 
and not need to compile from source), feel free to file an issue or pull request modifying the GitHub action used here to build wheels via maturin.\n\n\nUsage\nMethods in general are named similarly to their rpds counterparts (rather than pyrsistent\u2018s conventions, though probably a full drop-in pyrsistent-compatible wrapper module is a good addition at some point).\n>>> from rpds import HashTrieMap, HashTrieSet, List\n\n>>> m = HashTrieMap({\"foo\": \"bar\", \"baz\": \"quux\"})\n>>> m.insert(\"spam\", 37) == HashTrieMap({\"foo\": \"bar\", \"baz\": \"quux\", \"spam\": 37})\nTrue\n>>> m.remove(\"foo\") == HashTrieMap({\"baz\": \"quux\"})\nTrue\n\n>>> s = HashTrieSet({\"foo\", \"bar\", \"baz\", \"quux\"})\n>>> s.insert(\"spam\") == HashTrieSet({\"foo\", \"bar\", \"baz\", \"quux\", \"spam\"})\nTrue\n>>> s.remove(\"foo\") == HashTrieSet({\"bar\", \"baz\", \"quux\"})\nTrue\n\n>>> L = List([1, 3, 5])\n>>> L.push_front(-1) == List([-1, 1, 3, 5])\nTrue\n>>> L.rest == List([3, 5])\nTrue\n\n"}, {"name": "resampy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nresampy\nInstallation\n\n\n\n\n\nREADME.md\n\n\n\n\nresampy\n\n\n\n\n\n\n\nEfficient sample rate conversion in Python.\nThis package implements the band-limited sinc interpolation method for sampling rate conversion as described by:\n\nSmith, Julius O. Digital Audio Resampling Home Page\nCenter for Computer Research in Music and Acoustics (CCRMA),\nStanford University, 2015-02-23.\nWeb published at http://ccrma.stanford.edu/~jos/resample/.\n\nInstallation\nresampy can be installed pip by the following command:\npython -m pip install resampy\n\nIt can also be installed by conda as follows:\nconda install -c conda-forge resampy\n\n\n\n", "description": "Audio sample rate conversion."}, {"name": "requests", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nRequests\nInstalling Requests and Supported Versions\nSupported Features & Best\u2013Practices\nAPI Reference and User Guide available on Read the Docs\nCloning the repository\n\n\n\n\n\nREADME.md\n\n\n\n\nRequests\nRequests is a simple, yet elegant, HTTP library.\n>>> import requests\n>>> r = requests.get('https://httpbin.org/basic-auth/user/pass', auth=('user', 'pass'))\n>>> r.status_code\n200\n>>> r.headers['content-type']\n'application/json; charset=utf8'\n>>> r.encoding\n'utf-8'\n>>> r.text\n'{\"authenticated\": true, ...'\n>>> r.json()\n{'authenticated': True, ...}\nRequests allows you to send HTTP/1.1 requests extremely easily. There\u2019s no need to manually add query strings to your URLs, or to form-encode your PUT & POST data \u2014 but nowadays, just use the json method!\nRequests is one of the most downloaded Python packages today, pulling in around 30M downloads / week\u2014 according to GitHub, Requests is currently depended upon by 1,000,000+ repositories. 
You may certainly put your trust in this code.\n\n\n\nInstalling Requests and Supported Versions\nRequests is available on PyPI:\n$ python -m pip install requests\nRequests officially supports Python 3.7+.\nSupported Features & Best\u2013Practices\nRequests is ready for the demands of building robust and reliable HTTP\u2013speaking applications, for the needs of today.\n\nKeep-Alive & Connection Pooling\nInternational Domains and URLs\nSessions with Cookie Persistence\nBrowser-style TLS/SSL Verification\nBasic & Digest Authentication\nFamiliar dict\u2013like Cookies\nAutomatic Content Decompression and Decoding\nMulti-part File Uploads\nSOCKS Proxy Support\nConnection Timeouts\nStreaming Downloads\nAutomatic honoring of .netrc\nChunked HTTP Requests\n\nAPI Reference and User Guide available on Read the Docs\n\nCloning the repository\nWhen cloning the Requests repository, you may need to add the -c fetch.fsck.badTimezone=ignore flag to avoid an error about a bad commit (see\nthis issue for more background):\ngit clone -c fetch.fsck.badTimezone=ignore https://github.com/psf/requests.git\nYou can also apply this setting to your global Git config:\ngit config --global fetch.fsck.badTimezone ignore\n\n \n\n\n", "description": "Elegant HTTP library in Python.", "category": "Web"}, {"name": "reportlab", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "PDF document generation."}, {"name": "regex", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nNote\nPython 2\nPyPy\nMultithreading\nUnicode\nFlags\nOld vs new behaviour\nCase-insensitive matches in Unicode\nNested sets and set operations\nNotes on named groups\nAdditional features\nAdded \\p{Horiz_Space} and \\p{Vert_Space} (GitHub issue 477)\nAdded support for lookaround in conditional pattern (Hg issue 163)\nAdded POSIX matching (leftmost longest) (Hg issue 150)\nAdded (?(DEFINE)...) 
(Hg issue 152)\nAdded (*PRUNE), (*SKIP) and (*FAIL) (Hg issue 153)\nAdded \\K (Hg issue 151)\nAdded capture subscripting for expandf and subf/subfn (Hg issue 133)\nAdded support for referring to a group by number using (?P=...)\nFixed the handling of locale-sensitive regexes\nAdded partial matches (Hg issue 102)\n* operator not working correctly with sub() (Hg issue 106)\nAdded capturesdict (Hg issue 86)\nAdded allcaptures and allspans (Git issue 474)\nAllow duplicate names of groups (Hg issue 87)\nAdded fullmatch (issue #16203)\nAdded subf and subfn\nAdded expandf to match object\nDetach searched string\nRecursive patterns (Hg issue 27)\nFull Unicode case-folding is supported\nApproximate \"fuzzy\" matching (Hg issue 12, Hg issue 41, Hg issue 109)\nNamed lists \\L<name> (Hg issue 11)\nStart and end of word\nUnicode line separators\nSet operators\nregex.escape (issue #2650)\nregex.escape (Hg issue 249)\nRepeated captures (issue #7132)\nAtomic grouping (?>...) (issue #433030)\nPossessive quantifiers\nScoped flags (issue #433028)\nDefinition of 'word' character (issue #1693050)\nVariable-length lookbehind\nFlags argument for regex.split, regex.sub and regex.subn (issue #3482)\nPos and endpos arguments for regex.sub and regex.subn\n'Overlapped' argument for regex.findall and regex.finditer\nSplititer\nSubscripting match objects for groups\nNamed groups\nGroup references\nNamed characters \\N{name}\nUnicode codepoint properties, including scripts and blocks\nPOSIX character classes\nSearch anchor \\G\nReverse searching\nMatching a single grapheme \\X\nBranch reset (?|...|...)\nDefault Unicode word boundary\nTimeout\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nIntroduction\nThis regex implementation is backwards-compatible with the standard 're' module, but offers additional functionality.\n\nNote\nThe re module's behaviour with zero-width matches changed in Python 3.7, and this module follows that behaviour when compiled for Python 3.7.\n\nPython 2\nPython 2 is no longer supported. The last release that supported Python 2 was 2021.11.10.\n\nPyPy\nThis module is targeted at CPython. It expects that all codepoints are the same width, so it won't behave properly with PyPy outside U+0000..U+007F because PyPy stores strings as UTF-8.\n\nMultithreading\nThe regex module releases the GIL during matching on instances of the built-in (immutable) string classes, enabling other Python threads to run concurrently. It is also possible to force the regex module to release the GIL during matching by calling the matching methods with the keyword argument concurrent=True. The behaviour is undefined if the string changes during matching, so use it only when it is guaranteed that that won't happen.\n\nUnicode\nThis module supports Unicode 15.0.0. Full Unicode case-folding is supported.\n\nFlags\nThere are 2 kinds of flag: scoped and global. 
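For instance, a scoped flag can be applied to just one branch of an alternation; a minimal illustration (the pattern and strings here are arbitrary examples):

>>> import regex
>>> # IGNORECASE is scoped to the first branch only, so "eggs" stays case-sensitive
>>> regex.findall(r"(?i:spam)|eggs", "SPAM spam EGGS eggs")
['SPAM', 'spam', 'eggs']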
Scoped flags can apply to only part of a pattern and can be turned on or off; global flags apply to the entire pattern and can only be turned on.\nThe scoped flags are: ASCII (?a), FULLCASE (?f), IGNORECASE (?i), LOCALE (?L), MULTILINE (?m), DOTALL (?s), UNICODE (?u), VERBOSE (?x), WORD (?w).\nThe global flags are: BESTMATCH (?b), ENHANCEMATCH (?e), POSIX (?p), REVERSE (?r), VERSION0 (?V0), VERSION1 (?V1).\nIf neither the ASCII, LOCALE nor UNICODE flag is specified, it will default to UNICODE if the regex pattern is a Unicode string and ASCII if it's a bytestring.\nThe ENHANCEMATCH flag makes fuzzy matching attempt to improve the fit of the next match that it finds.\nThe BESTMATCH flag makes fuzzy matching search for the best match instead of the next match.\n\nOld vs new behaviour\nIn order to be compatible with the re module, this module has 2 behaviours:\n\nVersion 0 behaviour (old behaviour, compatible with the re module):\nPlease note that the re module's behaviour may change over time, and I'll endeavour to match that behaviour in version 0.\n\nIndicated by the VERSION0 flag.\nZero-width matches are not handled correctly in the re module before Python 3.7. The behaviour in those earlier versions is:\n.split won't split a string at a zero-width match.\n.sub will advance by one character after a zero-width match.\n\n\nInline flags apply to the entire pattern, and they can't be turned off.\nOnly simple sets are supported.\nCase-insensitive matches in Unicode use simple case-folding by default.\n\n\nVersion 1 behaviour (new behaviour, possibly different from the re module):\n\nIndicated by the VERSION1 flag.\nZero-width matches are handled correctly.\nInline flags apply to the end of the group or pattern, and they can be turned off.\nNested sets and set operations are supported.\nCase-insensitive matches in Unicode use full case-folding by default.\n\n\n\nIf no version is specified, the regex module will default to regex.DEFAULT_VERSION.\n\nCase-insensitive matches in Unicode\nThe regex module supports both simple and full case-folding for case-insensitive matches in Unicode. Use of full case-folding can be turned on using the FULLCASE flag. 
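As a minimal illustration of the difference (the pattern mirrors the full case-folding example later in this README):

>>> import regex
>>> # version 1 behaviour: full case-folding by default, so "ß" matches "SS"
>>> bool(regex.match(r"(?iV1)stra\N{LATIN SMALL LETTER SHARP S}e", "STRASSE"))
True
>>> # version 0 behaviour: simple case-folding by default, so it does not
>>> bool(regex.match(r"(?iV0)stra\N{LATIN SMALL LETTER SHARP S}e", "STRASSE"))
False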
Please note that this flag affects how the IGNORECASE flag works; the FULLCASE flag itself does not turn on case-insensitive matching.\nVersion 0 behaviour: the flag is off by default.\nVersion 1 behaviour: the flag is on by default.\n\nNested sets and set operations\nIt's not possible to support both simple sets, as used in the re module, and nested sets at the same time because of a difference in the meaning of an unescaped \"[\" in a set.\nFor example, the pattern [[a-z]--[aeiou]] is treated in the version 0 behaviour (simple sets, compatible with the re module) as:\n\nSet containing \"[\" and the letters \"a\" to \"z\"\nLiteral \"--\"\nSet containing letters \"a\", \"e\", \"i\", \"o\", \"u\"\nLiteral \"]\"\n\nbut in the version 1 behaviour (nested sets, enhanced behaviour) as:\n\nSet which is:\nSet containing the letters \"a\" to \"z\"\n\n\nbut excluding:\nSet containing the letters \"a\", \"e\", \"i\", \"o\", \"u\"\n\n\n\nVersion 0 behaviour: only simple sets are supported.\nVersion 1 behaviour: nested sets and set operations are supported.\n\nNotes on named groups\nAll groups have a group number, starting from 1.\nGroups with the same group name will have the same group number, and groups with a different group name will have a different group number.\nThe same name can be used by more than one group, with later captures 'overwriting' earlier captures. All the captures of the group will be available from the captures method of the match object.\nGroup numbers will be reused across different branches of a branch reset, eg. (?|(first)|(second)) has only group 1. If groups have different group names then they will, of course, have different group numbers, eg. (?|(?P<foo>first)|(?P<bar>second)) has group 1 (\"foo\") and group 2 (\"bar\").\nIn the regex (\\s+)(?|(?P<foo>[A-Z]+)|(\\w+) (?P<foo>[0-9]+) there are 2 groups:\n\n(\\s+) is group 1.\n(?P<foo>[A-Z]+) is group 2, also called \"foo\".\n(\\w+) is group 2 because of the branch reset.\n(?P<foo>[0-9]+) is group 2 because it's called \"foo\".\n\nIf you want to prevent (\\w+) from being group 2, you need to name it (different name, different group number).\n\nAdditional features\nThe issue numbers relate to the Python bug tracker, except where listed otherwise.\n\nAdded \\p{Horiz_Space} and \\p{Vert_Space} (GitHub issue 477)\n\\p{Horiz_Space} or \\p{H} matches horizontal whitespace and \\p{Vert_Space} or \\p{V} matches vertical whitespace.\n\nAdded support for lookaround in conditional pattern (Hg issue 163)\nThe test of a conditional pattern can be a lookaround.\n>>> regex.match(r'(?(?=\\d)\\d+|\\w+)', '123abc')\n<regex.Match object; span=(0, 3), match='123'>\n>>> regex.match(r'(?(?=\\d)\\d+|\\w+)', 'abc123')\n<regex.Match object; span=(0, 6), match='abc123'>\nThis is not quite the same as putting a lookaround in the first branch of a pair of alternatives.\n>>> print(regex.match(r'(?:(?=\\d)\\d+\\b|\\w+)', '123abc'))\n<regex.Match object; span=(0, 6), match='123abc'>\n>>> print(regex.match(r'(?(?=\\d)\\d+\\b|\\w+)', '123abc'))\nNone\nIn the first example, the lookaround matched, but the remainder of the first branch failed to match, and so the second branch was attempted, whereas in the second example, the lookaround matched, and the first branch failed to match, but the second branch was not attempted.\n\nAdded POSIX matching (leftmost longest) (Hg issue 150)\nThe POSIX standard for regex is to return the leftmost longest match. 
This can be turned on using the POSIX flag.\n>>> # Normal matching.\n>>> regex.search(r'Mr|Mrs', 'Mrs')\n<regex.Match object; span=(0, 2), match='Mr'>\n>>> regex.search(r'one(self)?(selfsufficient)?', 'oneselfsufficient')\n<regex.Match object; span=(0, 7), match='oneself'>\n>>> # POSIX matching.\n>>> regex.search(r'(?p)Mr|Mrs', 'Mrs')\n<regex.Match object; span=(0, 3), match='Mrs'>\n>>> regex.search(r'(?p)one(self)?(selfsufficient)?', 'oneselfsufficient')\n<regex.Match object; span=(0, 17), match='oneselfsufficient'>\nNote that it will take longer to find matches because when it finds a match at a certain position, it won't return that immediately, but will keep looking to see if there's another longer match there.\n\nAdded (?(DEFINE)...) (Hg issue 152)\nIf there's no group called \"DEFINE\", then ... will be ignored except that any groups defined within it can be called and that the normal rules for numbering groups still apply.\n>>> regex.search(r'(?(DEFINE)(?P<quant>\\d+)(?P<item>\\w+))(?&quant) (?&item)', '5 elephants')\n<regex.Match object; span=(0, 11), match='5 elephants'>\n\nAdded (*PRUNE), (*SKIP) and (*FAIL) (Hg issue 153)\n(*PRUNE) discards the backtracking info up to that point. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.\n(*SKIP) is similar to (*PRUNE), except that it also sets where in the text the next attempt to match will start. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.\n(*FAIL) causes immediate backtracking. (*F) is a permitted abbreviation.\n\nAdded \\K (Hg issue 151)\nKeeps the part of the entire match after the position where \\K occurred; the part before it is discarded.\nIt does not affect what groups return.\n>>> m = regex.search(r'(\\w\\w\\K\\w\\w\\w)', 'abcdef')\n>>> m[0]\n'cde'\n>>> m[1]\n'abcde'\n>>>\n>>> m = regex.search(r'(?r)(\\w\\w\\K\\w\\w\\w)', 'abcdef')\n>>> m[0]\n'bc'\n>>> m[1]\n'bcdef'\n\nAdded capture subscripting for expandf and subf/subfn (Hg issue 133)\nYou can use subscripting to get the captures of a repeated group.\n>>> m = regex.match(r\"(\\w)+\", \"abc\")\n>>> m.expandf(\"{1}\")\n'c'\n>>> m.expandf(\"{1[0]} {1[1]} {1[2]}\")\n'a b c'\n>>> m.expandf(\"{1[-1]} {1[-2]} {1[-3]}\")\n'c b a'\n>>>\n>>> m = regex.match(r\"(?P<letter>\\w)+\", \"abc\")\n>>> m.expandf(\"{letter}\")\n'c'\n>>> m.expandf(\"{letter[0]} {letter[1]} {letter[2]}\")\n'a b c'\n>>> m.expandf(\"{letter[-1]} {letter[-2]} {letter[-3]}\")\n'c b a'\n\nAdded support for referring to a group by number using (?P=...)\nThis is in addition to the existing \\g<...>.\n\nFixed the handling of locale-sensitive regexes\nThe LOCALE flag is intended for legacy code and has limited support. 
You're still recommended to use Unicode instead.\n\nAdded partial matches (Hg issue 102)\nA partial match is one that matches up to the end of string, but that string has been truncated and you want to know whether a complete match could be possible if the string had not been truncated.\nPartial matches are supported by match, search, fullmatch and finditer with the partial keyword argument.\nMatch objects have a partial attribute, which is True if it's a partial match.\nFor example, if you wanted a user to enter a 4-digit number and check it character by character as it was being entered:\n>>> pattern = regex.compile(r'\\d{4}')\n\n>>> # Initially, nothing has been entered:\n>>> print(pattern.fullmatch('', partial=True))\n<regex.Match object; span=(0, 0), match='', partial=True>\n\n>>> # An empty string is OK, but it's only a partial match.\n>>> # The user enters a letter:\n>>> print(pattern.fullmatch('a', partial=True))\nNone\n>>> # It'll never match.\n\n>>> # The user deletes that and enters a digit:\n>>> print(pattern.fullmatch('1', partial=True))\n<regex.Match object; span=(0, 1), match='1', partial=True>\n>>> # It matches this far, but it's only a partial match.\n\n>>> # The user enters 2 more digits:\n>>> print(pattern.fullmatch('123', partial=True))\n<regex.Match object; span=(0, 3), match='123', partial=True>\n>>> # It matches this far, but it's only a partial match.\n\n>>> # The user enters another digit:\n>>> print(pattern.fullmatch('1234', partial=True))\n<regex.Match object; span=(0, 4), match='1234'>\n>>> # It's a complete match.\n\n>>> # If the user enters another digit:\n>>> print(pattern.fullmatch('12345', partial=True))\nNone\n>>> # It's no longer a match.\n\n>>> # This is a partial match:\n>>> pattern.match('123', partial=True).partial\nTrue\n\n>>> # This is a complete match:\n>>> pattern.match('1233', partial=True).partial\nFalse\n\n* operator not working correctly with sub() (Hg issue 106)\nSometimes it's not clear how zero-width matches should be handled. 
For example, should .* match 0 characters directly after matching >0 characters?\n# Python 3.7 and later\n>>> regex.sub('.*', 'x', 'test')\n'xx'\n>>> regex.sub('.*?', '|', 'test')\n'|||||||||'\n\n# Python 3.6 and earlier\n>>> regex.sub('(?V0).*', 'x', 'test')\n'x'\n>>> regex.sub('(?V1).*', 'x', 'test')\n'xx'\n>>> regex.sub('(?V0).*?', '|', 'test')\n'|t|e|s|t|'\n>>> regex.sub('(?V1).*?', '|', 'test')\n'|||||||||'\n\nAdded capturesdict (Hg issue 86)\ncapturesdict is a combination of groupdict and captures:\ngroupdict returns a dict of the named groups and the last capture of those groups.\ncaptures returns a list of all the captures of a group\ncapturesdict returns a dict of the named groups and lists of all the captures of those groups.\n>>> m = regex.match(r\"(?:(?P<word>\\w+) (?P<digits>\\d+)\\n)+\", \"one 1\\ntwo 2\\nthree 3\\n\")\n>>> m.groupdict()\n{'word': 'three', 'digits': '3'}\n>>> m.captures(\"word\")\n['one', 'two', 'three']\n>>> m.captures(\"digits\")\n['1', '2', '3']\n>>> m.capturesdict()\n{'word': ['one', 'two', 'three'], 'digits': ['1', '2', '3']}\n\nAdded allcaptures and allspans (Git issue 474)\nallcaptures returns a list of all the captures of all the groups.\nallspans returns a list of all the spans of the all captures of all the groups.\n>>> m = regex.match(r\"(?:(?P<word>\\w+) (?P<digits>\\d+)\\n)+\", \"one 1\\ntwo 2\\nthree 3\\n\")\n>>> m.allcaptures()\n(['one 1\\ntwo 2\\nthree 3\\n'], ['one', 'two', 'three'], ['1', '2', '3'])\n>>> m.allspans()\n([(0, 20)], [(0, 3), (6, 9), (12, 17)], [(4, 5), (10, 11), (18, 19)])\n\nAllow duplicate names of groups (Hg issue 87)\nGroup names can be duplicated.\n>>> # With optional groups:\n>>>\n>>> # Both groups capture, the second capture 'overwriting' the first.\n>>> m = regex.match(r\"(?P<item>\\w+)? or (?P<item>\\w+)?\", \"first or second\")\n>>> m.group(\"item\")\n'second'\n>>> m.captures(\"item\")\n['first', 'second']\n>>> # Only the second group captures.\n>>> m = regex.match(r\"(?P<item>\\w+)? or (?P<item>\\w+)?\", \" or second\")\n>>> m.group(\"item\")\n'second'\n>>> m.captures(\"item\")\n['second']\n>>> # Only the first group captures.\n>>> m = regex.match(r\"(?P<item>\\w+)? 
or (?P<item>\\w+)?\", \"first or \")\n>>> m.group(\"item\")\n'first'\n>>> m.captures(\"item\")\n['first']\n>>>\n>>> # With mandatory groups:\n>>>\n>>> # Both groups capture, the second capture 'overwriting' the first.\n>>> m = regex.match(r\"(?P<item>\\w*) or (?P<item>\\w*)?\", \"first or second\")\n>>> m.group(\"item\")\n'second'\n>>> m.captures(\"item\")\n['first', 'second']\n>>> # Again, both groups capture, the second capture 'overwriting' the first.\n>>> m = regex.match(r\"(?P<item>\\w*) or (?P<item>\\w*)\", \" or second\")\n>>> m.group(\"item\")\n'second'\n>>> m.captures(\"item\")\n['', 'second']\n>>> # And yet again, both groups capture, the second capture 'overwriting' the first.\n>>> m = regex.match(r\"(?P<item>\\w*) or (?P<item>\\w*)\", \"first or \")\n>>> m.group(\"item\")\n''\n>>> m.captures(\"item\")\n['first', '']\n\nAdded fullmatch (issue #16203)\nfullmatch behaves like match, except that it must match all of the string.\n>>> print(regex.fullmatch(r\"abc\", \"abc\").span())\n(0, 3)\n>>> print(regex.fullmatch(r\"abc\", \"abcx\"))\nNone\n>>> print(regex.fullmatch(r\"abc\", \"abcx\", endpos=3).span())\n(0, 3)\n>>> print(regex.fullmatch(r\"abc\", \"xabcy\", pos=1, endpos=4).span())\n(1, 4)\n>>>\n>>> regex.match(r\"a.*?\", \"abcd\").group(0)\n'a'\n>>> regex.fullmatch(r\"a.*?\", \"abcd\").group(0)\n'abcd'\n\nAdded subf and subfn\nsubf and subfn are alternatives to sub and subn respectively. When passed a replacement string, they treat it as a format string.\n>>> regex.subf(r\"(\\w+) (\\w+)\", \"{0} => {2} {1}\", \"foo bar\")\n'foo bar => bar foo'\n>>> regex.subf(r\"(?P<word1>\\w+) (?P<word2>\\w+)\", \"{word2} {word1}\", \"foo bar\")\n'bar foo'\n\nAdded expandf to match object\nexpandf is an alternative to expand. When passed a replacement string, it treats it as a format string.\n>>> m = regex.match(r\"(\\w+) (\\w+)\", \"foo bar\")\n>>> m.expandf(\"{0} => {2} {1}\")\n'foo bar => bar foo'\n>>>\n>>> m = regex.match(r\"(?P<word1>\\w+) (?P<word2>\\w+)\", \"foo bar\")\n>>> m.expandf(\"{word2} {word1}\")\n'bar foo'\n\nDetach searched string\nA match object contains a reference to the string that was searched, via its string attribute. The detach_string method will 'detach' that string, making it available for garbage collection, which might save valuable memory if that string is very large.\n>>> m = regex.search(r\"\\w+\", \"Hello world\")\n>>> print(m.group())\nHello\n>>> print(m.string)\nHello world\n>>> m.detach_string()\n>>> print(m.group())\nHello\n>>> print(m.string)\nNone\n\nRecursive patterns (Hg issue 27)\nRecursive and repeated patterns are supported.\n(?R) or (?0) tries to match the entire regex recursively. (?1), (?2), etc, try to match the relevant group.\n(?&name) tries to match the named group.\n>>> regex.match(r\"(Tarzan|Jane) loves (?1)\", \"Tarzan loves Jane\").groups()\n('Tarzan',)\n>>> regex.match(r\"(Tarzan|Jane) loves (?1)\", \"Jane loves Tarzan\").groups()\n('Jane',)\n\n>>> m = regex.search(r\"(\\w)(?:(?R)|(\\w?))\\1\", \"kayak\")\n>>> m.group(0, 1, 2)\n('kayak', 'k', None)\nThe first two examples show how the subpattern within the group is reused, but is _not_ itself a group. 
In other words, \"(Tarzan|Jane) loves (?1)\" is equivalent to \"(Tarzan|Jane) loves (?:Tarzan|Jane)\".\nIt's possible to backtrack into a recursed or repeated group.\nYou can't call a group if there is more than one group with that group name or group number (\"ambiguous group reference\").\nThe alternative forms (?P>name) and (?P&name) are also supported.\n\nFull Unicode case-folding is supported\nIn version 1 behaviour, the regex module uses full case-folding when performing case-insensitive matches in Unicode.\n>>> regex.match(r\"(?iV1)strasse\", \"stra\\N{LATIN SMALL LETTER SHARP S}e\").span()\n(0, 6)\n>>> regex.match(r\"(?iV1)stra\\N{LATIN SMALL LETTER SHARP S}e\", \"STRASSE\").span()\n(0, 7)\nIn version 0 behaviour, it uses simple case-folding for backward compatibility with the re module.\n\nApproximate \"fuzzy\" matching (Hg issue 12, Hg issue 41, Hg issue 109)\nRegex usually attempts an exact match, but sometimes an approximate, or \"fuzzy\", match is needed, for those cases where the text being searched may contain errors in the form of inserted, deleted or substituted characters.\nA fuzzy regex specifies which types of errors are permitted, and, optionally, either the minimum and maximum or only the maximum permitted number of each type. (You cannot specify only a minimum.)\nThe 3 types of error are:\n\nInsertion, indicated by \"i\"\nDeletion, indicated by \"d\"\nSubstitution, indicated by \"s\"\n\nIn addition, \"e\" indicates any type of error.\nThe fuzziness of a regex item is specified between \"{\" and \"}\" after the item.\nExamples:\n\nfoo match \"foo\" exactly\n(?:foo){i} match \"foo\", permitting insertions\n(?:foo){d} match \"foo\", permitting deletions\n(?:foo){s} match \"foo\", permitting substitutions\n(?:foo){i,s} match \"foo\", permitting insertions and substitutions\n(?:foo){e} match \"foo\", permitting errors\n\nIf a certain type of error is specified, then any type not specified will not be permitted.\nIn the following examples I'll omit the item and write only the fuzziness:\n\n{d<=3} permit at most 3 deletions, but no other types\n{i<=1,s<=2} permit at most 1 insertion and at most 2 substitutions, but no deletions\n{1<=e<=3} permit at least 1 and at most 3 errors\n{i<=2,d<=2,e<=3} permit at most 2 insertions, at most 2 deletions, at most 3 errors in total, but no substitutions\n\nIt's also possible to state the costs of each type of error and the maximum permitted total cost.\nExamples:\n\n{2i+2d+1s<=4} each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4\n{i<=1,d<=1,s<=1,2i+2d+1s<=4} at most 1 insertion, at most 1 deletion, at most 1 substitution; each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4\n\nYou can also use \"<\" instead of \"<=\" if you want an exclusive minimum or maximum.\nYou can add a test to perform on a character that's substituted or inserted.\nExamples:\n\n{s<=2:[a-z]} at most 2 substitutions, which must be in the character set [a-z].\n{s<=2,i<=3:\\d} at most 2 substitutions, at most 3 insertions, which must be digits.\n\nBy default, fuzzy matching searches for the first match that meets the given constraints. The ENHANCEMATCH flag will cause it to attempt to improve the fit (i.e. 
reduce the number of errors) of the match that it has found.\nThe BESTMATCH flag will make it search for the best match instead.\nFurther examples to note:\n\nregex.search(\"(dog){e}\", \"cat and dog\")[1] returns \"cat\" because that matches \"dog\" with 3 errors (an unlimited number of errors is permitted).\nregex.search(\"(dog){e<=1}\", \"cat and dog\")[1] returns \" dog\" (with a leading space) because that matches \"dog\" with 1 error, which is within the limit.\nregex.search(\"(?e)(dog){e<=1}\", \"cat and dog\")[1] returns \"dog\" (without a leading space) because the fuzzy search matches \" dog\" with 1 error, which is within the limit, and the (?e) then makes it attempt a better fit.\n\nIn the first two examples there are perfect matches later in the string, but in neither case is it the first possible match.\nThe match object has an attribute fuzzy_counts which gives the total number of substitutions, insertions and deletions.\n>>> # A 'raw' fuzzy match:\n>>> regex.fullmatch(r\"(?:cats|cat){e<=1}\", \"cat\").fuzzy_counts\n(0, 0, 1)\n>>> # 0 substitutions, 0 insertions, 1 deletion.\n\n>>> # A better match might be possible if the ENHANCEMATCH flag is used:\n>>> regex.fullmatch(r\"(?e)(?:cats|cat){e<=1}\", \"cat\").fuzzy_counts\n(0, 0, 0)\n>>> # 0 substitutions, 0 insertions, 0 deletions.\nThe match object also has an attribute fuzzy_changes which gives a tuple of the positions of the substitutions, insertions and deletions.\n>>> m = regex.search('(fuu){i<=2,d<=2,e<=5}', 'anaconda foo bar')\n>>> m\n<regex.Match object; span=(7, 10), match='a f', fuzzy_counts=(0, 2, 2)>\n>>> m.fuzzy_changes\n([], [7, 8], [10, 11])\nWhat this means is that if the matched part of the string had been:\n'anacondfuuoo bar'\nit would've been an exact match.\nHowever, there were insertions at positions 7 and 8:\n'anaconda fuuoo bar'\n        ^^\nand deletions at positions 10 and 11:\n'anaconda f~~oo bar'\n           ^^\nSo the actual string was:\n'anaconda foo bar'\n\nNamed lists \\L<name> (Hg issue 11)\nThere are occasions where you may want to include a list (actually, a set) of options in a regex.\nOne way is to build the pattern like this:\n>>> p = regex.compile(r\"first|second|third|fourth|fifth\")\nbut if the list is large, parsing the resulting regex can take considerable time, and care must also be taken that the strings are properly escaped and properly ordered, for example, \"cats\" before \"cat\".\nThe new alternative is to use a named list:\n>>> option_set = [\"first\", \"second\", \"third\", \"fourth\", \"fifth\"]\n>>> p = regex.compile(r\"\\L<options>\", options=option_set)\nThe order of the items is irrelevant; they are treated as a set. 
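As a small usage sketch (not from the original README; the sample string and output are only what the documented behaviour implies), the compiled pattern can then be used like any other pattern:\n>>> # Illustrative only:\n>>> p.findall('first or fourth, then fifth')\n['first', 'fourth', 'fifth']\n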
The named lists are available as the .named_lists attribute of the pattern object :\n>>> print(p.named_lists)\n{'options': frozenset({'third', 'first', 'fifth', 'fourth', 'second'})}\nIf there are any unused keyword arguments, ValueError will be raised unless you tell it otherwise:\n>>> option_set = [\"first\", \"second\", \"third\", \"fourth\", \"fifth\"]\n>>> p = regex.compile(r\"\\L<options>\", options=option_set, other_options=[])\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 353, in compile\n    return _compile(pattern, flags, ignore_unused, kwargs, cache_pattern)\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 500, in _compile\n    complain_unused_args()\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 483, in complain_unused_args\n    raise ValueError('unused keyword argument {!a}'.format(any_one))\nValueError: unused keyword argument 'other_options'\n>>> p = regex.compile(r\"\\L<options>\", options=option_set, other_options=[], ignore_unused=True)\n>>> p = regex.compile(r\"\\L<options>\", options=option_set, other_options=[], ignore_unused=False)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 353, in compile\n    return _compile(pattern, flags, ignore_unused, kwargs, cache_pattern)\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 500, in _compile\n    complain_unused_args()\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 483, in complain_unused_args\n    raise ValueError('unused keyword argument {!a}'.format(any_one))\nValueError: unused keyword argument 'other_options'\n>>>\n\nStart and end of word\n\\m matches at the start of a word.\n\\M matches at the end of a word.\nCompare with \\b, which matches at the start or end of a word.\n\nUnicode line separators\nNormally the only line separator is \\n (\\x0A), but if the WORD flag is turned on then the line separators are \\x0D\\x0A, \\x0A, \\x0B, \\x0C and \\x0D, plus \\x85, \\u2028 and \\u2029 when working with Unicode.\nThis affects the regex dot \".\", which, with the DOTALL flag turned off, matches any character except a line separator. It also affects the line anchors ^ and $ (in multiline mode).\n\nSet operators\nVersion 1 behaviour only\nSet operators have been added, and a set [...] can include nested sets.\nThe operators, in order of increasing precedence, are:\n\n|| for union (\"x||y\" means \"x or y\")\n~~ (double tilde) for symmetric difference (\"x~~y\" means \"x or y, but not both\")\n&& for intersection (\"x&&y\" means \"x and y\")\n-- (double dash) for difference (\"x--y\" means \"x but not y\")\n\nImplicit union, ie, simple juxtaposition like in [ab], has the highest precedence. Thus, [ab&&cd] is the same as [[a||b]&&[c||d]].\nExamples:\n\n[ab] # Set containing 'a' and 'b'\n[a-z] # Set containing 'a' .. 'z'\n[[a-z]--[qw]] # Set containing 'a' .. 'z', but not 'q' or 'w'\n[a-z--qw] # Same as above\n[\\p{L}--QW] # Set containing all letters except 'Q' and 'W'\n[\\p{N}--[0-9]] # Set containing all numbers except '0' .. '9'\n[\\p{ASCII}&&\\p{Letter}] # Set containing all characters which are ASCII and letter\n\n\nregex.escape (issue #2650)\nregex.escape has an additional keyword parameter special_only. 
When True, only 'special' regex characters, such as '?', are escaped.\n>>> regex.escape(\"foo!?\", special_only=False)\n'foo\\\\!\\\\?'\n>>> regex.escape(\"foo!?\", special_only=True)\n'foo!\\\\?'\n\nregex.escape (Hg issue 249)\nregex.escape has an additional keyword parameter literal_spaces. When True, spaces are not escaped.\n>>> regex.escape(\"foo bar!?\", literal_spaces=False)\n'foo\\\\ bar!\\\\?'\n>>> regex.escape(\"foo bar!?\", literal_spaces=True)\n'foo bar!\\\\?'\n\nRepeated captures (issue #7132)\nA match object has additional methods which return information on all the successful matches of a repeated group. These methods are:\n\nmatchobject.captures([group1, ...])\nReturns a list of the strings matched in a group or groups. Compare with matchobject.group([group1, ...]).\n\n\nmatchobject.starts([group])\nReturns a list of the start positions. Compare with matchobject.start([group]).\n\n\nmatchobject.ends([group])\nReturns a list of the end positions. Compare with matchobject.end([group]).\n\n\nmatchobject.spans([group])\nReturns a list of the spans. Compare with matchobject.span([group]).\n\n\n\n>>> m = regex.search(r\"(\\w{3})+\", \"123456789\")\n>>> m.group(1)\n'789'\n>>> m.captures(1)\n['123', '456', '789']\n>>> m.start(1)\n6\n>>> m.starts(1)\n[0, 3, 6]\n>>> m.end(1)\n9\n>>> m.ends(1)\n[3, 6, 9]\n>>> m.span(1)\n(6, 9)\n>>> m.spans(1)\n[(0, 3), (3, 6), (6, 9)]\n\nAtomic grouping (?>...) (issue #433030)\nIf the following pattern subsequently fails, then the subpattern as a whole will fail.\n\nPossessive quantifiers\n(?:...)?+ ; (?:...)*+ ; (?:...)++ ; (?:...){min,max}+\nThe subpattern is matched up to 'max' times. If the following pattern subsequently fails, then all the repeated subpatterns will fail as a whole. For example, (?:...)++ is equivalent to (?>(?:...)+).\n\nScoped flags (issue #433028)\n(?flags-flags:...)\nThe flags will apply only to the subpattern. Flags can be turned on or off.\n\nDefinition of 'word' character (issue #1693050)\nThe definition of a 'word' character has been expanded for Unicode. It conforms to the Unicode specification at http://www.unicode.org/reports/tr29/.\n\nVariable-length lookbehind\nA lookbehind can match a variable-length string.\n\nFlags argument for regex.split, regex.sub and regex.subn (issue #3482)\nregex.split, regex.sub and regex.subn support a 'flags' argument.\n\nPos and endpos arguments for regex.sub and regex.subn\nregex.sub and regex.subn support 'pos' and 'endpos' arguments.\n\n'Overlapped' argument for regex.findall and regex.finditer\nregex.findall and regex.finditer support an 'overlapped' flag which permits overlapped matches.\n\nSplititer\nregex.splititer has been added. It's a generator equivalent of regex.split.\n\nSubscripting match objects for groups\nA match object accepts access to the groups via subscripting and slicing:\n>>> m = regex.search(r\"(?P<before>.*?)(?P<num>\\d+)(?P<after>.*)\", \"pqr123stu\")\n>>> print(m[\"before\"])\npqr\n>>> print(len(m))\n4\n>>> print(m[:])\n('pqr123stu', 'pqr', '123', 'stu')\n\nNamed groups\nGroups can be named with (?<name>...) as well as the existing (?P<name>...).\n\nGroup references\nGroups can be referenced within a pattern with \\g<name>. This also allows there to be more than 99 groups.\n\nNamed characters \\N{name}\nNamed characters are supported. 
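For example (an illustrative sketch, not from the original README):\n>>> regex.search(r'\\N{LATIN SMALL LETTER SHARP S}', 'straße').group()\n'ß'\n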
Note that only those known by Python's Unicode database will be recognised.\n\nUnicode codepoint properties, including scripts and blocks\n\\p{property=value}; \\P{property=value}; \\p{value} ; \\P{value}\nMany Unicode properties are supported, including blocks and scripts. \\p{property=value} or \\p{property:value} matches a character whose property property has value value. The inverse of \\p{property=value} is \\P{property=value} or \\p{^property=value}.\nIf the short form \\p{value} is used, the properties are checked in the order: General_Category, Script, Block, binary property:\n\nLatin, the 'Latin' script (Script=Latin).\nBasicLatin, the 'BasicLatin' block (Block=BasicLatin).\nAlphabetic, the 'Alphabetic' binary property (Alphabetic=Yes).\n\nA short form starting with Is indicates a script or binary property:\n\nIsLatin, the 'Latin' script (Script=Latin).\nIsAlphabetic, the 'Alphabetic' binary property (Alphabetic=Yes).\n\nA short form starting with In indicates a block property:\n\nInBasicLatin, the 'BasicLatin' block (Block=BasicLatin).\n\n\nPOSIX character classes\n[[:alpha:]]; [[:^alpha:]]\nPOSIX character classes are supported. These are normally treated as an alternative form of \\p{...}.\nThe exceptions are alnum, digit, punct and xdigit, whose definitions are different from those of Unicode.\n[[:alnum:]] is equivalent to \\p{posix_alnum}.\n[[:digit:]] is equivalent to \\p{posix_digit}.\n[[:punct:]] is equivalent to \\p{posix_punct}.\n[[:xdigit:]] is equivalent to \\p{posix_xdigit}.\n\nSearch anchor \\G\nA search anchor has been added. It matches at the position where each search started/continued and can be used for contiguous matches or in negative variable-length lookbehinds to limit how far back the lookbehind goes:\n>>> regex.findall(r\"\\w{2}\", \"abcd ef\")\n['ab', 'cd', 'ef']\n>>> regex.findall(r\"\\G\\w{2}\", \"abcd ef\")\n['ab', 'cd']\n\nThe search starts at position 0 and matches 'ab'.\nThe search continues at position 2 and matches 'cd'.\nThe search continues at position 4 and fails to match any letters.\nThe anchor stops the search start position from being advanced, so there are no more results.\n\n\nReverse searching\nSearches can also work backwards:\n>>> regex.findall(r\".\", \"abc\")\n['a', 'b', 'c']\n>>> regex.findall(r\"(?r).\", \"abc\")\n['c', 'b', 'a']\nNote that the result of a reverse search is not necessarily the reverse of a forward search:\n>>> regex.findall(r\"..\", \"abcde\")\n['ab', 'cd']\n>>> regex.findall(r\"(?r)..\", \"abcde\")\n['de', 'bc']\n\nMatching a single grapheme \\X\nThe grapheme matcher is supported. It conforms to the Unicode specification at http://www.unicode.org/reports/tr29/.\n\nBranch reset (?|...|...)\nGroup numbers will be reused across the alternatives, but groups with different names will have different group numbers.\n>>> regex.match(r\"(?|(first)|(second))\", \"first\").groups()\n('first',)\n>>> regex.match(r\"(?|(first)|(second))\", \"second\").groups()\n('second',)\nNote that there is only one group.\n\nDefault Unicode word boundary\nThe WORD flag changes the definition of a 'word boundary' to that of a default Unicode word boundary. This applies to \\b and \\B.\n\nTimeout\nThe matching methods and functions support timeouts. The timeout (in seconds) applies to the entire operation:\n>>> from time import sleep\n>>>\n>>> def fast_replace(m):\n...     return 'X'\n...\n>>> def slow_replace(m):\n...     sleep(0.5)\n...     
return 'X'\n...\n>>> regex.sub(r'[a-z]', fast_replace, 'abcde', timeout=2)\n'XXXXX'\n>>> regex.sub(r'[a-z]', slow_replace, 'abcde', timeout=2)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"C:\\Python310\\lib\\site-packages\\regex\\regex.py\", line 278, in sub\n    return pat.sub(repl, string, count, pos, endpos, concurrent, timeout)\nTimeoutError: regex timed out\n\n\n", "description": "Alternative regular expression module to replace re."}, {"name": "referencing", "readme": "\n    \nAn implementation-agnostic implementation of JSON reference resolution.\nSee the documentation for more details.\n", "description": "JSON reference resolution."}, {"name": "rdflib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRDFLib\nRDFlib Family of packages\nVersions & Releases\nDocumentation\nInstallation\nInstallation of the current main branch (for developers)\nGetting Started\nFeatures\nRunning tests\nRunning the tests on the host\nRunning test coverage on the host with coverage report\nViewing test coverage\nContributing\nSupport & Contacts\n\n\n\n\n\nREADME.md\n\n\n\n\n\nRDFLib\n\n\n\n\n\n\n\n\n\n\n\nRDFLib is a pure Python package for working with RDF. RDFLib contains most things you need to work with RDF, including:\n\nparsers and serializers for RDF/XML, N3, NTriples, N-Quads, Turtle, TriX, Trig and JSON-LD\na Graph interface which can be backed by any one of a number of Store implementations\nstore implementations for in-memory, persistent on disk (Berkeley DB) and remote SPARQL endpoints\na SPARQL 1.1 implementation - supporting SPARQL 1.1 Queries and Update statements\nSPARQL function extension mechanisms\n\nRDFlib Family of packages\nThe RDFlib community maintains many RDF-related Python code repositories with different purposes. For example:\n\nrdflib - the RDFLib core\nsparqlwrapper - a simple Python wrapper around a SPARQL service to remotely execute your queries\npyLODE - An OWL ontology documentation tool using Python and templating, based on LODE.\npyrdfa3 - RDFa 1.1 distiller/parser library: can extract RDFa 1.1/1.0 from (X)HTML, SVG, or XML in general.\npymicrodata - A module to extract RDF from an HTML5 page annotated with microdata.\npySHACL - A pure Python module which allows for the validation of RDF graphs against SHACL graphs.\nOWL-RL - A simple implementation of the OWL2 RL Profile which expands the graph with all possible triples that OWL RL defines.\n\nPlease see the list for all packages/repositories here:\n\nhttps://github.com/RDFLib\n\nHelp with maintenance of all of the RDFLib family of packages is always welcome and appreciated.\nVersions & Releases\n\n7.1.0a0 current main branch.\n7.x.y current release, supports Python 3.8.1+ only.\n\nsee Releases\n\n\n6.x.y supports Python 3.7+ only. Many improvements over 5.0.0\n\nsee Releases\n\n\n5.x.y supports Python 2.7 and 3.4+ and is mostly backwards compatible with 4.2.2.\n\nSee https://rdflib.dev for the release overview.\nDocumentation\nSee https://rdflib.readthedocs.io for our documentation built from the code. 
Note that there are latest, stable 5.0.0 and 4.2.2 documentation versions, matching releases.\nInstallation\nThe stable release of RDFLib may be installed with Python's package management tool pip:\n$ pip install rdflib\n\nAlternatively manually download the package from the Python Package\nIndex (PyPI) at https://pypi.python.org/pypi/rdflib\nThe current version of RDFLib is 7.0.0, see the CHANGELOG.md file for what's new in this release.\nInstallation of the current main branch (for developers)\nWith pip you can also install rdflib from the git repository with one of the following options:\n$ pip install git+https://github.com/rdflib/rdflib@main\n\nor\n$ pip install -e git+https://github.com/rdflib/rdflib@main#egg=rdflib\n\nor from your locally cloned repository you can install it with one of the following options:\n$ poetry install  # installs into a poetry-managed venv\n\nor\n$ pip install -e .\n\nGetting Started\nRDFLib aims to be a pythonic RDF API. RDFLib's main data object is a Graph which is a Python collection\nof RDF Subject, Predicate, Object Triples:\nTo create graph and load it with RDF data from DBPedia then print the results:\nfrom rdflib import Graph\ng = Graph()\ng.parse('http://dbpedia.org/resource/Semantic_Web')\n\nfor s, p, o in g:\n    print(s, p, o)\nThe components of the triples are URIs (resources) or Literals\n(values).\nURIs are grouped together by namespace, common namespaces are included in RDFLib:\nfrom rdflib.namespace import DC, DCTERMS, DOAP, FOAF, SKOS, OWL, RDF, RDFS, VOID, XMLNS, XSD\nYou can use them like this:\nfrom rdflib import Graph, URIRef, Literal\nfrom rdflib.namespace import RDFS, XSD\n\ng = Graph()\nsemweb = URIRef('http://dbpedia.org/resource/Semantic_Web')\ntype = g.value(semweb, RDFS.label)\nWhere RDFS is the RDFS namespace, XSD the XML Schema Datatypes namespace and g.value returns an object of the triple-pattern given (or an arbitrary one if multiple exist).\nOr like this, adding a triple to a graph g:\ng.add((\n    URIRef(\"http://example.com/person/nick\"),\n    FOAF.givenName,\n    Literal(\"Nick\", datatype=XSD.string)\n))\nThe triple (in n-triples notation) <http://example.com/person/nick> <http://xmlns.com/foaf/0.1/givenName> \"Nick\"^^<http://www.w3.org/2001/XMLSchema#string> .\nis created where the property FOAF.givenName is the URI <http://xmlns.com/foaf/0.1/givenName> and XSD.string is the\nURI <http://www.w3.org/2001/XMLSchema#string>.\nYou can bind namespaces to prefixes to shorten the URIs for RDF/XML, Turtle, N3, TriG, TriX & JSON-LD serializations:\ng.bind(\"foaf\", FOAF)\ng.bind(\"xsd\", XSD)\nThis will allow the n-triples triple above to be serialised like this:\nprint(g.serialize(format=\"turtle\"))\nWith these results:\nPREFIX foaf: <http://xmlns.com/foaf/0.1/>\nPREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n\n<http://example.com/person/nick> foaf:givenName \"Nick\"^^xsd:string .\nNew Namespaces can also be defined:\ndbpedia = Namespace('http://dbpedia.org/ontology/')\n\nabstracts = list(x for x in g.objects(semweb, dbpedia['abstract']) if x.language=='en')\nSee also ./examples\nFeatures\nThe library contains parsers and serializers for RDF/XML, N3,\nNTriples, N-Quads, Turtle, TriX, JSON-LD, RDFa and Microdata.\nThe library presents a Graph interface which can be backed by\nany one of a number of Store implementations.\nThis core RDFLib package includes store implementations for\nin-memory storage and persistent storage on top of the Berkeley DB.\nA SPARQL 1.1 implementation is included - supporting SPARQL 1.1 Queries 
and Update statements.\nRDFLib is open source and is maintained on GitHub. RDFLib releases, current and previous\nare listed on PyPI\nMultiple other projects are contained within the RDFlib \"family\", see https://github.com/RDFLib/.\nRunning tests\nRunning the tests on the host\nRun the test suite with pytest.\npoetry install\npoetry run pytest\nRunning test coverage on the host with coverage report\nRun the test suite and generate a HTML coverage report with pytest and pytest-cov.\npoetry run pytest --cov\nViewing test coverage\nOnce tests have produced HTML output of the coverage report, view it by running:\npoetry run pytest --cov --cov-report term --cov-report html\npython -m http.server --directory=htmlcov\nContributing\nRDFLib survives and grows via user contributions!\nPlease read our contributing guide and developers guide to get started.\nPlease consider lodging Pull Requests here:\n\nhttps://github.com/RDFLib/rdflib/pulls\n\nTo get a development environment consider using Gitpod or Google Cloud Shell.\n\n\nYou can also raise issues here:\n\nhttps://github.com/RDFLib/rdflib/issues\n\nSupport & Contacts\nFor general \"how do I...\" queries, please use https://stackoverflow.com and tag your question with rdflib.\nExisting questions:\n\nhttps://stackoverflow.com/questions/tagged/rdflib\n\nIf you want to contact the rdflib maintainers, please do so via:\n\nthe rdflib-dev mailing list: https://groups.google.com/group/rdflib-dev\nthe chat, which is available at gitter or via matrix #RDFLib_rdflib:gitter.im\n\n\n\n", "description": "RDF library for semantic web and linked data."}, {"name": "rasterio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRasterio\nExample\nAPI Overview\nRasterio CLI\nRio Plugins\nInstallation\nSupport\nDevelopment and Testing\nDocumentation\nLicense\nAuthors\nChanges\nWho is Using Rasterio?\n\n\n\n\n\nREADME.rst\n\n\n\n\nRasterio\nRasterio reads and writes geospatial raster data.\n\n\n\nGeographic information systems use GeoTIFF and other formats to organize and\nstore gridded, or raster, datasets. Rasterio reads and writes these formats and\nprovides a Python API based on N-D arrays.\nRasterio 1.4 works with Python 3.9+, Numpy 1.21+, and GDAL 3.3+. Official\nbinary packages for Linux, macOS, and Windows with most built-in format\ndrivers plus HDF5, netCDF, and OpenJPEG2000 are available on PyPI.\nRead the documentation for more details: https://rasterio.readthedocs.io/.\n\nExample\nHere's an example of some basic features that Rasterio provides. Three bands\nare read from an image and averaged to produce something like a panchromatic\nband.  This new band is then written to a new single band TIFF.\nimport numpy as np\nimport rasterio\n\n# Read raster bands directly to Numpy arrays.\n#\nwith rasterio.open('tests/data/RGB.byte.tif') as src:\n    r, g, b = src.read()\n\n# Combine arrays in place. Expecting that the sum will\n# temporarily exceed the 8-bit integer range, initialize it as\n# a 64-bit float (the numpy default) array. Adding other\n# arrays to it in-place converts those arrays \"up\" and\n# preserves the type of the total array.\ntotal = np.zeros(r.shape)\n\nfor band in r, g, b:\n    total += band\n\ntotal /= 3\n\n# Write the product as a raster band to a new 8-bit file. 
For\n# the new file's profile, we start with the meta attributes of\n# the source file, but then change the band count to 1, set the\n# dtype to uint8, and specify LZW compression.\nprofile = src.profile\nprofile.update(dtype=rasterio.uint8, count=1, compress='lzw')\n\nwith rasterio.open('example-total.tif', 'w', **profile) as dst:\n    dst.write(total.astype(rasterio.uint8), 1)\nThe output:\n\n\nAPI Overview\nRasterio gives access to properties of a geospatial raster file.\nwith rasterio.open('tests/data/RGB.byte.tif') as src:\n    print(src.width, src.height)\n    print(src.crs)\n    print(src.transform)\n    print(src.count)\n    print(src.indexes)\n\n# Printed:\n# (791, 718)\n# {u'units': u'm', u'no_defs': True, u'ellps': u'WGS84', u'proj': u'utm', u'zone': 18}\n# Affine(300.0379266750948, 0.0, 101985.0,\n#        0.0, -300.041782729805, 2826915.0)\n# 3\n# [1, 2, 3]\nA rasterio dataset also provides methods for getting read/write windows (like\nextended array slices) given georeferenced coordinates.\nwith rasterio.open('tests/data/RGB.byte.tif') as src:\n    window = src.window(*src.bounds)\n    print(window)\n    print(src.read(window=window).shape)\n\n# Printed:\n# Window(col_off=0.0, row_off=0.0, width=791.0000000000002, height=718.0)\n# (3, 718, 791)\n\nRasterio CLI\nRasterio's command line interface, named \"rio\", is documented at cli.rst. Its rio\ninsp command opens the hood of any raster dataset so you can poke around\nusing Python.\n$ rio insp tests/data/RGB.byte.tif\nRasterio 0.10 Interactive Inspector (Python 3.4.1)\nType \"src.meta\", \"src.read(1)\", or \"help(src)\" for more information.\n>>> src.name\n'tests/data/RGB.byte.tif'\n>>> src.closed\nFalse\n>>> src.shape\n(718, 791)\n>>> src.crs\n{'init': 'epsg:32618'}\n>>> b, g, r = src.read()\n>>> b\nmasked_array(data =\n [[-- -- -- ..., -- -- --]\n [-- -- -- ..., -- -- --]\n [-- -- -- ..., -- -- --]\n ...,\n [-- -- -- ..., -- -- --]\n [-- -- -- ..., -- -- --]\n [-- -- -- ..., -- -- --]],\n             mask =\n [[ True  True  True ...,  True  True  True]\n [ True  True  True ...,  True  True  True]\n [ True  True  True ...,  True  True  True]\n ...,\n [ True  True  True ...,  True  True  True]\n [ True  True  True ...,  True  True  True]\n [ True  True  True ...,  True  True  True]],\n       fill_value = 0)\n\n>>> np.nanmin(b), np.nanmax(b), np.nanmean(b)\n(0, 255, 29.94772668847656)\n\nRio Plugins\nRio provides the ability to create subcommands using plugins.  See\ncli.rst\nfor more information on building plugins.\nSee the\nplugin registry\nfor a list of available plugins.\n\nInstallation\nSee docs/installation.rst\n\nSupport\nThe primary forum for questions about installation and usage of Rasterio is\nhttps://rasterio.groups.io/g/main. The authors and other users will answer\nquestions when they have expertise to share and time to explain. 
Please take\nthe time to craft a clear question and be patient about responses.\nPlease do not bring these questions to Rasterio's issue tracker, which we want\nto reserve for bug reports and other actionable issues.\n\nDevelopment and Testing\nSee CONTRIBUTING.rst.\n\nDocumentation\nSee docs/.\n\nLicense\nSee LICENSE.txt.\n\nAuthors\nThe rasterio project was begun at Mapbox and was transferred to the rasterio Github organization in October 2021.\nSee AUTHORS.txt.\n\nChanges\nSee CHANGES.txt.\n\nWho is Using Rasterio?\nSee here.\n\n\n", "description": "Raster input/output library built on GDAL."}, {"name": "rarfile", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nrarfile - RAR archive reader for Python\nThis is Python module for RAR archive reading.\nThe interface follows the style of zipfile.\nLicensed under ISC license.\nFeatures:\n\nSupports both RAR3 and RAR5 format archives.\nSupports multi volume archives.\nSupports Unicode filenames.\nSupports password-protected archives.\nSupports archive and file comments.\nArchive parsing and non-compressed files are handled in pure Python code.\nCompressed files are extracted by executing external tool:\nunrar (preferred), unar or bsdtar.\nWorks with Python 3.6+.\n\nLinks:\n\nDocumentation\nDownloads\nGit repo\n\n\n\n", "description": "RAR archive extraction."}, {"name": "qrcode", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPure python QR Code generator\nWhat is a QR Code?\nUsage\nAdvanced Usage\nOther image factories\nSVG\nPure Python PNG\nStyled Image\nExamples\n\n\n\n\n\nREADME.rst\n\n\n\n\nPure python QR Code generator\nGenerate QR codes.\nA standard install uses pypng to generate PNG files and can also render QR\ncodes directly to the console. A standard install is just:\npip install qrcode\n\nFor more image functionality, install qrcode with the pil dependency so\nthat pillow is installed and can be used for generating images:\npip install \"qrcode[pil]\"\n\n\nWhat is a QR Code?\nA Quick Response code is a two-dimensional pictographic code used for its fast\nreadability and comparatively large storage capacity. The code consists of\nblack modules arranged in a square pattern on a white background. The\ninformation encoded can be made up of any kind of data (e.g., binary,\nalphanumeric, or Kanji symbols)\n\nUsage\nFrom the command line, use the installed qr script:\nqr \"Some text\" > test.png\n\nOr in Python, use the make shortcut function:\nimport qrcode\nimg = qrcode.make('Some data here')\ntype(img)  # qrcode.image.pil.PilImage\nimg.save(\"some_file.png\")\n\nAdvanced Usage\nFor more control, use the QRCode class. For example:\nimport qrcode\nqr = qrcode.QRCode(\n    version=1,\n    error_correction=qrcode.constants.ERROR_CORRECT_L,\n    box_size=10,\n    border=4,\n)\nqr.add_data('Some data')\nqr.make(fit=True)\n\nimg = qr.make_image(fill_color=\"black\", back_color=\"white\")\nThe version parameter is an integer from 1 to 40 that controls the size of\nthe QR Code (the smallest, version 1, is a 21x21 matrix).\nSet to None and use the fit parameter when making the code to determine\nthis automatically.\nfill_color and back_color can change the background and the painting\ncolor of the QR, when using the default image factory. Both parameters accept\nRGB color tuples.\nimg = qr.make_image(back_color=(255, 195, 235), fill_color=(55, 95, 35))\nThe error_correction parameter controls the error correction used for the\nQR Code. 
The following four constants are made available on the qrcode\npackage:\n\nERROR_CORRECT_L\nAbout 7% or less errors can be corrected.\nERROR_CORRECT_M (default)\nAbout 15% or less errors can be corrected.\nERROR_CORRECT_Q\nAbout 25% or less errors can be corrected.\nERROR_CORRECT_H.\nAbout 30% or less errors can be corrected.\n\nThe box_size parameter controls how many pixels each \"box\" of the QR code\nis.\nThe border parameter controls how many boxes thick the border should be\n(the default is 4, which is the minimum according to the specs).\n\nOther image factories\nYou can encode as SVG, or use a new pure Python image processor to encode to\nPNG images.\nThe Python examples below use the make shortcut. The same image_factory\nkeyword argument is a valid option for the QRCode class for more advanced\nusage.\n\nSVG\nYou can create the entire SVG or an SVG fragment. When building an entire SVG\nimage, you can use the factory that combines as a path (recommended, and\ndefault for the script) or a factory that creates a simple set of rectangles.\nFrom your command line:\nqr --factory=svg-path \"Some text\" > test.svg\nqr --factory=svg \"Some text\" > test.svg\nqr --factory=svg-fragment \"Some text\" > test.svg\n\nOr in Python:\nimport qrcode\nimport qrcode.image.svg\n\nif method == 'basic':\n    # Simple factory, just a set of rects.\n    factory = qrcode.image.svg.SvgImage\nelif method == 'fragment':\n    # Fragment factory (also just a set of rects)\n    factory = qrcode.image.svg.SvgFragmentImage\nelse:\n    # Combined path factory, fixes white space that may occur when zooming\n    factory = qrcode.image.svg.SvgPathImage\n\nimg = qrcode.make('Some data here', image_factory=factory)\nTwo other related factories are available that work the same, but also fill the\nbackground of the SVG with white:\nqrcode.image.svg.SvgFillImage\nqrcode.image.svg.SvgPathFillImage\n\nThe QRCode.make_image() method forwards additional keyword arguments to the\nunderlying ElementTree XML library. This helps to fine tune the root element of\nthe resulting SVG:\nimport qrcode\nqr = qrcode.QRCode(image_factory=qrcode.image.svg.SvgPathImage)\nqr.add_data('Some data')\nqr.make(fit=True)\n\nimg = qr.make_image(attrib={'class': 'some-css-class'})\nYou can convert the SVG image into strings using the to_string() method.\nAdditional keyword arguments are forwarded to ElementTrees tostring():\nimg.to_string(encoding='unicode')\n\nPure Python PNG\nIf Pillow is not installed, the default image factory will be a pure Python PNG\nencoder that uses pypng.\nYou can use the factory explicitly from your command line:\nqr --factory=png \"Some text\" > test.png\n\nOr in Python:\nimport qrcode\nfrom qrcode.image.pure import PyPNGImage\nimg = qrcode.make('Some data here', image_factory=PyPNGImage)\n\nStyled Image\nWorks only with versions >=7.2 (SVG styled images require 7.4).\nTo apply styles to the QRCode, use the StyledPilImage or one of the\nstandard SVG image factories. 
These accept an optional module_drawer\nparameter to control the shape of the QR Code.\nThese QR Codes are not guaranteed to work with all readers, so do some\nexperimentation and set the error correction to high (especially if embedding\nan image).\nOther PIL module drawers:\n\n\n\nFor SVGs, use SvgSquareDrawer, SvgCircleDrawer,\nSvgPathSquareDrawer, or SvgPathCircleDrawer.\nThese all accept a size_ratio argument which allows for \"gapped\" squares or\ncircles by reducing this less than the default of Decimal(1).\nThe StyledPilImage additionally accepts an optional color_mask\nparameter to change the colors of the QR Code, and an optional\nembeded_image_path to embed an image in the center of the code.\nOther color masks:\n\n\n\nHere is a code example to draw a QR code with rounded corners, radial gradient\nand an embedded image:\nimport qrcode\nfrom qrcode.image.styledpil import StyledPilImage\nfrom qrcode.image.styles.moduledrawers.pil import RoundedModuleDrawer\nfrom qrcode.image.styles.colormasks import RadialGradiantColorMask\n\nqr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_L)\nqr.add_data('Some data')\n\nimg_1 = qr.make_image(image_factory=StyledPilImage, module_drawer=RoundedModuleDrawer())\nimg_2 = qr.make_image(image_factory=StyledPilImage, color_mask=RadialGradiantColorMask())\nimg_3 = qr.make_image(image_factory=StyledPilImage, embeded_image_path=\"/path/to/image.png\")\n\nExamples\nGet the text content from print_ascii:\nimport io\nimport qrcode\nqr = qrcode.QRCode()\nqr.add_data(\"Some text\")\nf = io.StringIO()\nqr.print_ascii(out=f)\nf.seek(0)\nprint(f.read())\nThe add_data method will append data to the current QR object. To add new data by replacing previous content in the same object, first use clear method:\nimport qrcode\nqr = qrcode.QRCode()\nqr.add_data('Some data')\nimg = qr.make_image()\nqr.clear()\nqr.add_data('New data')\nother_img = qr.make_image()\nPipe ascii output to text file in command line:\nqr --ascii \"Some data\" > \"test.txt\"\ncat test.txt\n\nAlternative to piping output to file to avoid PowerShell issues:\n# qr \"Some data\" > test.png\nqr --output=test.png \"Some data\"\n\n\n\n", "description": "Generate QR codes in Python."}, {"name": "pyzmq", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyZMQ: Python bindings for \u00d8MQ\n\u00d8MQ 3.x, 4.x\nDocumentation\nDownloading\nBuilding and installation\nOld versions\n\n\n\n\n\nREADME.md\n\n\n\n\nPyZMQ: Python bindings for \u00d8MQ\nThis package contains Python bindings for ZeroMQ.\n\u00d8MQ is a lightweight and fast messaging implementation.\nPyZMQ should work with any reasonable version of Python (\u2265 3.4),\nas well as Python 2.7 and 3.3, as well as PyPy.\nThe Cython backend used by CPython supports libzmq \u2265 2.1.4 (including 3.2.x and 4.x),\nbut the CFFI backend used by PyPy only supports libzmq \u2265 3.2.2 (including 4.x).\nFor a summary of changes to pyzmq, see our\nchangelog.\n\u00d8MQ 3.x, 4.x\nPyZMQ fully supports the 3.x and 4.x APIs of libzmq,\ndeveloped at zeromq/libzmq.\nNo code to change, no flags to pass,\njust build pyzmq against the latest and it should work.\nPyZMQ does not support the old libzmq 2 API on PyPy.\nDocumentation\nSee PyZMQ's Sphinx-generated\ndocumentation on Read the Docs for API\ndetails, and some notes on Python and Cython development. If you want to\nlearn about using \u00d8MQ in general, the excellent \u00d8MQ\nGuide is the place to start, which has a\nPython version of every example. 
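For a first taste of the API, here is a minimal request/reply sketch (not taken from this README; the endpoint, port and messages are made up for illustration):\nimport zmq\n\n# One REQ and one REP socket in the same process, purely for illustration.\nctx = zmq.Context()\nrep = ctx.socket(zmq.REP)\nrep.bind('tcp://127.0.0.1:5555')    # arbitrary local port\nreq = ctx.socket(zmq.REQ)\nreq.connect('tcp://127.0.0.1:5555')\n\nreq.send(b'ping')    # the REQ side must send first\nprint(rep.recv())    # b'ping'\nrep.send(b'pong')\nprint(req.recv())    # b'pong'\nctx.destroy()\n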
We also have some information on our\nwiki.\nDownloading\nUnless you specifically want to develop PyZMQ, we recommend downloading\nthe PyZMQ source code or wheels from\nPyPI,\nor install with conda.\nYou can also get the latest source code from our GitHub repository, but\nbuilding from the repository will require that you install recent Cython.\nBuilding and installation\nFor more detail on building pyzmq, see our Wiki.\nWe build wheels for macOS, Windows, and Linux, so you can get a binary on those platforms with:\npip install pyzmq\n\nbut compiling from source with pip install pyzmq should work in most environments.\nEspecially on macOS, make sure you are using the latest pip (\u2265 8), or it may not find the right wheels.\nIf the wheel doesn't work for some reason, or you want to force pyzmq to be compiled\n(this is often preferable if you already have libzmq installed and configured the way you want it),\nyou can force installation with:\npip install --no-binary=:all: pyzmq\n\nWhen compiling pyzmq (e.g. installing with pip on Linux),\nit is generally recommended that zeromq be installed separately,\nvia homebrew, apt, yum, etc:\n# Debian-based\nsudo apt-get install libzmq3-dev\n\n# RHEL-based\nsudo yum install libzmq3-devel\n\nIf this is not available, pyzmq will try to build libzmq as a Python Extension,\nthough this is not guaranteed to work.\nBuilding pyzmq from the git repo (including release tags on GitHub) requires Cython.\nOld versions\npyzmq 16 drops support Python 2.6 and 3.2.\nIf you need to use one of those Python versions, you can pin your pyzmq version to before 16:\npip install 'pyzmq<16'\n\nFor libzmq 2.0.x, use 'pyzmq<2.1'\npyzmq-2.1.11 was the last version of pyzmq to support Python 2.5,\nand pyzmq \u2265 2.2.0 requires Python \u2265 2.6.\npyzmq-13.0.0 introduces PyPy support via CFFI, which only supports libzmq-3.2.2 and newer.\nPyZMQ releases \u2264 2.2.0 matched libzmq versioning, but this is no longer the case,\nstarting with PyZMQ 13.0.0 (it was the thirteenth release, so why not?).\nPyZMQ \u2265 13.0 follows semantic versioning conventions accounting only for PyZMQ itself.\n\n\n", "description": "Python bindings for ZeroMQ messaging library."}, {"name": "pyzbar", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npyzbar\nInstallation\nExample usage\nZBar versions\nQuality field\nBounding boxes and polygons\nWindows error message\nContributors\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\npyzbar\n\n\n\n\n\n\nRead one-dimensional barcodes and QR codes from Python 2 and 3 using the\nzbar library.\n\nPure python\nWorks with PIL / Pillow images, OpenCV / imageio / numpy ndarrays, and raw bytes\nDecodes locations of barcodes\nNo dependencies, other than the zbar library itself\nTested on Python 2.7, and Python 3.5 to 3.10\n\nThe older zbar\npackage is stuck in Python 2.x-land.\nThe zbarlight package does not\nprovide support for Windows and depends upon Pillow.\n\nInstallation\nThe zbar DLLs are included with the Windows Python wheels.\nOn other operating systems, you will need to install the zbar shared\nlibrary.\nMac OS X:\nbrew install zbar\n\nLinux:\nsudo apt-get install libzbar0\n\nInstall this Python wrapper; use the second form to install dependencies of the\ncommand-line scripts:\npip install pyzbar\npip install pyzbar[scripts]\n\n\nExample usage\nThe decode function accepts instances of PIL.Image.\n>>> from pyzbar.pyzbar import decode\n>>> from PIL import Image\n>>> decode(Image.open('pyzbar/tests/code128.png'))\n[\n    Decoded(\n        data=b'Foramenifera', 
type='CODE128',\n        rect=Rect(left=37, top=550, width=324, height=76),\n        polygon=[\n            Point(x=37, y=551), Point(x=37, y=625), Point(x=361, y=626),\n            Point(x=361, y=550)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n    Decoded(\n        data=b'Rana temporaria', type='CODE128',\n        rect=Rect(left=4, top=0, width=390, height=76),\n        polygon=[\n            Point(x=4, y=1), Point(x=4, y=75), Point(x=394, y=76),\n            Point(x=394, y=0)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n]\n\nIt also accepts instances of numpy.ndarray, which might come from loading\nimages using OpenCV.\n>>> import cv2\n>>> decode(cv2.imread('pyzbar/tests/code128.png'))\n[\n    Decoded(\n        data=b'Foramenifera', type='CODE128',\n        rect=Rect(left=37, top=550, width=324, height=76),\n        polygon=[\n            Point(x=37, y=551), Point(x=37, y=625), Point(x=361, y=626),\n            Point(x=361, y=550)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n    Decoded(\n        data=b'Rana temporaria', type='CODE128',\n        rect=Rect(left=4, top=0, width=390, height=76),\n        polygon=[\n            Point(x=4, y=1), Point(x=4, y=75), Point(x=394, y=76),\n            Point(x=394, y=0)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n]\n\nYou can also provide a tuple (pixels, width, height), where the image data\nis eight bits-per-pixel.\n>>> image = cv2.imread('pyzbar/tests/code128.png')\n>>> height, width = image.shape[:2]\n\n>>> # 8 bpp by considering just the blue channel\n>>> decode((image[:, :, 0].astype('uint8').tobytes(), width, height))\n[\n    Decoded(\n        data=b'Foramenifera', type='CODE128',\n        rect=Rect(left=37, top=550, width=324, height=76),\n        polygon=[\n            Point(x=37, y=551), Point(x=37, y=625), Point(x=361, y=626),\n            Point(x=361, y=550)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n    Decoded(\n        data=b'Rana temporaria', type='CODE128',\n        rect=Rect(left=4, top=0, width=390, height=76),\n        polygon=[\n            Point(x=4, y=1), Point(x=4, y=75), Point(x=394, y=76),\n            Point(x=394, y=0)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n]\n\n>>> # 8 bpp by converting image to greyscale\n>>> grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n>>> decode((grey.tobytes(), width, height))\n[\n    Decoded(\n        data=b'Foramenifera', type='CODE128',\n        rect=Rect(left=37, top=550, width=324, height=76),\n        polygon=[\n            Point(x=37, y=551), Point(x=37, y=625), Point(x=361, y=626),\n            Point(x=361, y=550)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n    Decoded(\n        data=b'Rana temporaria', type='CODE128',\n        rect=Rect(left=4, top=0, width=390, height=76),\n        polygon=[\n            Point(x=4, y=1), Point(x=4, y=75), Point(x=394, y=76),\n            Point(x=394, y=0)\n        ],\n        orientation=\"UP\",\n        quality=77\n    )\n]\n\n>>> # If you don't provide 8 bpp\n>>> decode((image.tobytes(), width, height))\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"/Users/lawh/projects/pyzbar/pyzbar/pyzbar.py\", line 102, in decode\n    raise PyZbarError('Unsupported bits-per-pixel [{0}]'.format(bpp))\npyzbar.pyzbar_error.PyZbarError: Unsupported bits-per-pixel [24]\n\nThe default behaviour is to decode all symbol types. 
You can look for just your\nsymbol types\n>>> from pyzbar.pyzbar import ZBarSymbol\n>>> # Look for just qrcode\n>>> decode(Image.open('pyzbar/tests/qrcode.png'), symbols=[ZBarSymbol.QRCODE])\n[\n    Decoded(\n        data=b'Thalassiodracon', type='QRCODE',\n        rect=Rect(left=27, top=27, width=145, height=145),\n        polygon=[\n            Point(x=27, y=27), Point(x=27, y=172), Point(x=172, y=172),\n            Point(x=172, y=27)\n        ],\n        orientation=\"UP\",\n        quality=1\n    )\n]\n\n\n>>> # If we look for just code128, the qrcodes in the image will not be detected\n>>> decode(Image.open('pyzbar/tests/qrcode.png'), symbols=[ZBarSymbol.CODE128])\n[]\n\n\nZBar versions\nDevelopment of the original zbar stopped in 2012.\nDevelopment was started again in 2019 under a new project\nthat has added some new features, including support for decoding\nbarcode orientation. At the time of writing this new project does not produce Windows DLLs.\nThe zbar DLLs that are included with the Windows Python wheels are built from the original\nproject and so do not include support for decoding barcode orientation.\nIf you see orientation=None then your system has an older release of zbar that does\nnot support orientation.\n\nQuality field\nFrom\nzbar.h, the quality field is\n\n...an unscaled, relative quantity: larger values are better than smaller\nvalues, where \"large\" and \"small\" are application dependent. Expect the exact\ndefinition of this quantity to change as the metric is refined. currently,\nonly the ordered relationship between two values is defined and will remain\nstable in the future\n\nBounding boxes and polygons\nThe blue and pink boxes show rect and polygon, respectively, for\nbarcodes in pyzbar/tests/qrcode.png (see\nbounding_box_and_polygon.py).\n\n\n\n\nWindows error message\nIf you see an ugly ImportError when importing pyzbar on Windows\nyou will most likely need the Visual C++ Redistributable Packages for Visual\nStudio 2013.\nInstall vcredist_x64.exe if using 64-bit Python, vcredist_x86.exe if\nusing 32-bit Python.\n\nContributors\n\nAlex (@globophobe) - first implementation of barcode locations\nDmytro Ferens (@dferens) - barcode orientation\nIsmail Bento (@isman7) - support for images loaded using imageio\n@jaant - read barcodes containing null characters\n\n\nLicense\npyzbar is distributed under the MIT license (see LICENCE.txt).\nThe zbar shared library is distributed under the\nGNU Lesser General Public License, version 2.1\n\n\n", "description": "Read one-dimensional barcodes and QR codes with zbar library."}, {"name": "PyYAML", "readme": "\nYAML is a data serialization format designed for human readability\nand interaction with scripting languages.  PyYAML is a YAML parser\nand emitter for Python.\nPyYAML features a complete YAML 1.1 parser, Unicode support, pickle\nsupport, capable extension API, and sensible error messages.  PyYAML\nsupports standard YAML tags and provides Python-specific tags that\nallow to represent an arbitrary Python object.\nPyYAML is applicable for a broad range of tasks from complex\nconfiguration files to object serialization and persistence.\n", "description": "YAML parser and emitter.", "category": "Serialization/data exchange"}, {"name": "pyxlsb", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyxlsb\nInstall\nUsage\n\n\n\n\n\nREADME.rst\n\n\n\n\npyxlsb\n\n\npyxlsb is an Excel 2007-2010 Binary Workbook (xlsb) parser for\nPython. 
The library is currently extremely limited, but functional\nenough for basic data extraction.\n\nInstall\npip install pyxlsb\n\nUsage\nThe module exposes an open_workbook(name) method (similar to Xlrd\nand OpenPyXl) for opening XLSB files. The Workbook object representing\nthe file is returned.\nfrom pyxlsb import open_workbook\nwith open_workbook('Book1.xlsb') as wb:\n    # Do stuff with wb\nThe Workbook object exposes a get_sheet(idx) method for retrieving a\nWorksheet instance.\n# Using the sheet index (1-based)\nwith wb.get_sheet(1) as sheet:\n    # Do stuff with sheet\n\n# Using the sheet name\nwith wb.get_sheet('Sheet1') as sheet:\n    # Do stuff with sheet\nTip: A sheets property containing the sheet names is available on\nthe Workbook instance.\nThe rows() method will hand out an iterator to read the worksheet\nrows.\n# You can use .rows(sparse=True) to skip empty rows\nfor row in sheet.rows():\n    print(row)\n# [Cell(r=0, c=0, v='TEXT'), Cell(r=0, c=1, v=42.1337)]\nDo note that dates will appear as floats. You must use the\nconvert_date(date) method from the pyxlsb module to turn them\ninto datetime instances.\nfrom pyxlsb import convert_date\nprint(convert_date(41235.45578))\n# datetime.datetime(2012, 11, 22, 10, 56, 19)\n\n\n", "description": "Parse Excel 2007-2010 Binary Workbook (xlsb) files."}, {"name": "PyWavelets", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyWavelets\nWhat is PyWavelets\nDocumentation\nInstallation\nState of development & Contributing\nContact\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\nService\nMaster branch\n\n\n\nTravis\n\n\nAppveyor\n\n\nRead the Docs\n\n\n\n\n\nPyWavelets\n\nContents\n\nPyWavelets\nWhat is PyWavelets\nDocumentation\nInstallation\nState of development & Contributing\nContact\nLicense\n\n\n\n\n\nWhat is PyWavelets\nPyWavelets is a free Open Source library for wavelet transforms in Python.\nWavelets are mathematical basis functions that are localized in both time and\nfrequency.  Wavelet transforms are time-frequency transforms employing\nwavelets.  They are similar to Fourier transforms, the difference being that\nFourier transforms are localized only in frequency instead of in time and\nfrequency.\nThe main features of PyWavelets are:\n\n\n1D, 2D and nD Forward and Inverse Discrete Wavelet Transform (DWT and IDWT)\n1D, 2D and nD Multilevel DWT and IDWT\n1D and 2D Stationary Wavelet Transform (Undecimated Wavelet Transform)\n1D and 2D Wavelet Packet decomposition and reconstruction\n1D Continuous Wavelet Transform\nComputing Approximations of wavelet and scaling functions\nOver 100 built-in wavelet filters and support for custom wavelets\nSingle and double precision calculations\nReal and complex calculations\nResults compatible with Matlab Wavelet Toolbox (TM)\n\n\n\nDocumentation\nDocumentation with detailed examples and links to more resources is available\nonline at http://pywavelets.readthedocs.org.\nFor more usage examples see the demo directory in the source package.\n\nInstallation\nPyWavelets supports Python >=3.7, and is only dependent on NumPy\n(supported versions are currently >= 1.14.6). To pass all of the tests,\nMatplotlib is also required. SciPy is also an optional dependency. When\npresent, FFT-based continuous wavelet transforms will use FFTs from SciPy\nrather than NumPy.\nThere are binary wheels for Intel Linux, Windows and macOS / OSX on PyPi.  
If\nyou are on one of these platforms, you should get a binary (precompiled)\ninstallation with:\npip install PyWavelets\n\nUsers of the Anaconda Python distribution may wish to obtain pre-built\nWindows, Intel Linux or macOS / OSX binaries from the conda-forge channel.\nThis can be done via:\nconda install -c conda-forge pywavelets\n\nSeveral Linux distributions have their own packages for PyWavelets, but these\ntend to be moderately out of date.  Query your Linux package manager tool for\npython-pywavelets, python-wavelets, python-pywt or a similar\npackage name.\nIf you want or need to install from source, you will need a working C compiler\n(any common one will work) and a recent version of Cython.  Navigate to the\nPyWavelets source code directory (containing pyproject.toml) and type:\npip install .\n\nThe most recent development version can be found on GitHub at\nhttps://github.com/PyWavelets/pywt.\nThe latest release, including source and binary packages for Intel Linux,\nmacOS and Windows, is available for download from the Python Package Index.\nYou can find source releases at the Releases Page.\n\nState of development & Contributing\nPyWavelets started in 2006 as an academic project for a master thesis\non Analysis and Classification of Medical Signals using Wavelet Transforms\nand was maintained until 2012 by its original developer.  In 2013\nmaintenance was taken over in a new repo)\nby a larger development team - a move supported by the original developer.\nThe repo move doesn't mean that this is a fork - the package continues to be\ndeveloped under the name \"PyWavelets\", and released on PyPi and Github (see\nthis issue for the discussion\nwhere that was decided).\nAll contributions including bug reports, bug fixes, new feature implementations\nand documentation improvements are welcome.  Moreover, developers with an\ninterest in PyWavelets are very welcome to join the development team!\nAs of 2019, PyWavelets development is supported in part by Tidelift.\nHelp support PyWavelets with the Tidelift Subscription\n\nContact\nUse GitHub Issues or the mailing list to post your comments or questions.\nReport a security vulnerability: https://tidelift.com/security\n\nLicense\nPyWavelets is a free Open Source software released under the MIT license.\nIf you wish to cite PyWavelets in a publication, please use the following\nJOSS publication.\n\n\nSpecific releases can also be cited via Zenodo. The DOI below will correspond\nto the most recent release. DOIs for past versions can be found by following\nthe link in the badge below to Zenodo:\n\n\n\n\n", "description": "Wavelet transforms library for Python."}, {"name": "pytz", "readme": "\n\nAuthor:\nStuart Bishop <stuart@stuartbishop.net>\n\n\nIntroduction\npytz brings the Olson tz database into Python. This library allows\naccurate and cross platform timezone calculations using Python 2.4\nor higher. 
It also solves the issue of ambiguous times at the end\nof daylight saving time, which you can read more about in the Python\nLibrary Reference (datetime.tzinfo).\nAlmost all of the Olson timezones are supported.\n\nNote\nProjects using Python 3.9 or later should be using the support\nnow included as part of the standard library, and third party\npackages work with it such as tzdata.\npytz offers no advantages beyond backwards compatibility with\ncode written for earlier versions of Python.\n\n\nNote\nThis library differs from the documented Python API for\ntzinfo implementations; if you want to create local wallclock\ntimes you need to use the localize() method documented in this\ndocument. In addition, if you perform date arithmetic on local\ntimes that cross DST boundaries, the result may be in an incorrect\ntimezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get\n2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A\nnormalize() method is provided to correct this. Unfortunately these\nissues cannot be resolved without modifying the Python datetime\nimplementation (see PEP-431).\n\n\n\nInstallation\nThis package can either be installed using pip or from a tarball using the\nstandard Python distutils.\nIf you are installing using pip, you don\u2019t need to download anything as the\nlatest version will be downloaded for you from PyPI:\npip install pytz\nIf you are installing from a tarball, run the following command as an\nadministrative user:\npython setup.py install\n\n\npytz for Enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of pytz and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more..\n\n\nExample & Usage\n\nLocalized times and date arithmetic\n>>> from datetime import datetime, timedelta\n>>> from pytz import timezone\n>>> import pytz\n>>> utc = pytz.utc\n>>> utc.zone\n'UTC'\n>>> eastern = timezone('US/Eastern')\n>>> eastern.zone\n'US/Eastern'\n>>> amsterdam = timezone('Europe/Amsterdam')\n>>> fmt = '%Y-%m-%d %H:%M:%S %Z%z'\n\nThis library only supports two ways of building a localized time. 
The\nfirst is to use the localize() method provided by the pytz library.\nThis is used to localize a naive datetime (datetime with no timezone\ninformation):\n>>> loc_dt = eastern.localize(datetime(2002, 10, 27, 6, 0, 0))\n>>> print(loc_dt.strftime(fmt))\n2002-10-27 06:00:00 EST-0500\n\nThe second way of building a localized time is by converting an existing\nlocalized time using the standard astimezone() method:\n>>> ams_dt = loc_dt.astimezone(amsterdam)\n>>> ams_dt.strftime(fmt)\n'2002-10-27 12:00:00 CET+0100'\n\nUnfortunately using the tzinfo argument of the standard datetime\nconstructors \u2018\u2019does not work\u2019\u2019 with pytz for many timezones.\n>>> datetime(2002, 10, 27, 12, 0, 0, tzinfo=amsterdam).strftime(fmt)  # /!\\ Does not work this way!\n'2002-10-27 12:00:00 LMT+0018'\n\nIt is safe for timezones without daylight saving transitions though, such\nas UTC:\n>>> datetime(2002, 10, 27, 12, 0, 0, tzinfo=pytz.utc).strftime(fmt)  # /!\\ Not recommended except for UTC\n'2002-10-27 12:00:00 UTC+0000'\n\nThe preferred way of dealing with times is to always work in UTC,\nconverting to localtime only when generating output to be read\nby humans.\n>>> utc_dt = datetime(2002, 10, 27, 6, 0, 0, tzinfo=utc)\n>>> loc_dt = utc_dt.astimezone(eastern)\n>>> loc_dt.strftime(fmt)\n'2002-10-27 01:00:00 EST-0500'\n\nThis library also allows you to do date arithmetic using local\ntimes, although it is more complicated than working in UTC as you\nneed to use the normalize() method to handle daylight saving time\nand other timezone transitions. In this example, loc_dt is set\nto the instant when daylight saving time ends in the US/Eastern\ntimezone.\n>>> before = loc_dt - timedelta(minutes=10)\n>>> before.strftime(fmt)\n'2002-10-27 00:50:00 EST-0500'\n>>> eastern.normalize(before).strftime(fmt)\n'2002-10-27 01:50:00 EDT-0400'\n>>> after = eastern.normalize(before + timedelta(minutes=20))\n>>> after.strftime(fmt)\n'2002-10-27 01:10:00 EST-0500'\n\nCreating local times is also tricky, and the reason why working with\nlocal times is not recommended. Unfortunately, you cannot just pass\na tzinfo argument when constructing a datetime (see the next\nsection for more details)\n>>> dt = datetime(2002, 10, 27, 1, 30, 0)\n>>> dt1 = eastern.localize(dt, is_dst=True)\n>>> dt1.strftime(fmt)\n'2002-10-27 01:30:00 EDT-0400'\n>>> dt2 = eastern.localize(dt, is_dst=False)\n>>> dt2.strftime(fmt)\n'2002-10-27 01:30:00 EST-0500'\n\nConverting between timezones is more easily done, using the\nstandard astimezone method.\n>>> utc_dt = datetime.fromtimestamp(1143408899, tz=utc)\n>>> utc_dt.strftime(fmt)\n'2006-03-26 21:34:59 UTC+0000'\n>>> au_tz = timezone('Australia/Sydney')\n>>> au_dt = utc_dt.astimezone(au_tz)\n>>> au_dt.strftime(fmt)\n'2006-03-27 08:34:59 AEDT+1100'\n>>> utc_dt2 = au_dt.astimezone(utc)\n>>> utc_dt2.strftime(fmt)\n'2006-03-26 21:34:59 UTC+0000'\n>>> utc_dt == utc_dt2\nTrue\n\nYou can take shortcuts when dealing with the UTC side of timezone\nconversions. 
normalize() and localize() are not really\nnecessary when there are no daylight saving time transitions to\ndeal with.\n>>> utc_dt = datetime.fromtimestamp(1143408899, tz=utc)\n>>> utc_dt.strftime(fmt)\n'2006-03-26 21:34:59 UTC+0000'\n>>> au_tz = timezone('Australia/Sydney')\n>>> au_dt = au_tz.normalize(utc_dt.astimezone(au_tz))\n>>> au_dt.strftime(fmt)\n'2006-03-27 08:34:59 AEDT+1100'\n>>> utc_dt2 = au_dt.astimezone(utc)\n>>> utc_dt2.strftime(fmt)\n'2006-03-26 21:34:59 UTC+0000'\n\n\n\ntzinfo API\nThe tzinfo instances returned by the timezone() function have\nbeen extended to cope with ambiguous times by adding an is_dst\nparameter to the utcoffset(), dst() && tzname() methods.\n>>> tz = timezone('America/St_Johns')\n\n>>> normal = datetime(2009, 9, 1)\n>>> ambiguous = datetime(2009, 10, 31, 23, 30)\n\nThe is_dst parameter is ignored for most timestamps. It is only used\nduring DST transition ambiguous periods to resolve that ambiguity.\n>>> print(tz.utcoffset(normal, is_dst=True))\n-1 day, 21:30:00\n>>> print(tz.dst(normal, is_dst=True))\n1:00:00\n>>> tz.tzname(normal, is_dst=True)\n'NDT'\n\n>>> print(tz.utcoffset(ambiguous, is_dst=True))\n-1 day, 21:30:00\n>>> print(tz.dst(ambiguous, is_dst=True))\n1:00:00\n>>> tz.tzname(ambiguous, is_dst=True)\n'NDT'\n\n>>> print(tz.utcoffset(normal, is_dst=False))\n-1 day, 21:30:00\n>>> tz.dst(normal, is_dst=False).seconds\n3600\n>>> tz.tzname(normal, is_dst=False)\n'NDT'\n\n>>> print(tz.utcoffset(ambiguous, is_dst=False))\n-1 day, 20:30:00\n>>> tz.dst(ambiguous, is_dst=False)\ndatetime.timedelta(0)\n>>> tz.tzname(ambiguous, is_dst=False)\n'NST'\n\nIf is_dst is not specified, ambiguous timestamps will raise\nan pytz.exceptions.AmbiguousTimeError exception.\n>>> print(tz.utcoffset(normal))\n-1 day, 21:30:00\n>>> print(tz.dst(normal))\n1:00:00\n>>> tz.tzname(normal)\n'NDT'\n\n>>> import pytz.exceptions\n>>> try:\n...     tz.utcoffset(ambiguous)\n... except pytz.exceptions.AmbiguousTimeError:\n...     print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)\npytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00\n>>> try:\n...     tz.dst(ambiguous)\n... except pytz.exceptions.AmbiguousTimeError:\n...     print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)\npytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00\n>>> try:\n...     tz.tzname(ambiguous)\n... except pytz.exceptions.AmbiguousTimeError:\n...     print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)\npytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00\n\n\n\n\nProblems with Localtime\nThe major problem we have to deal with is that certain datetimes\nmay occur twice in a year. For example, in the US/Eastern timezone\non the last Sunday morning in October, the following sequence\nhappens:\n\n\n01:00 EDT occurs\n1 hour later, instead of 2:00am the clock is turned back 1 hour\nand 01:00 happens again (this time 01:00 EST)\n\n\nIn fact, every instant between 01:00 and 02:00 occurs twice. This means\nthat if you try and create a time in the \u2018US/Eastern\u2019 timezone\nthe standard datetime syntax, there is no way to specify if you meant\nbefore of after the end-of-daylight-saving-time transition. Using the\npytz custom syntax, the best you can do is make an educated guess:\n>>> loc_dt = eastern.localize(datetime(2002, 10, 27, 1, 30, 00))\n>>> loc_dt.strftime(fmt)\n'2002-10-27 01:30:00 EST-0500'\n\nAs you can see, the system has chosen one for you and there is a 50%\nchance of it being out by one hour. For some applications, this does\nnot matter. 
However, if you are trying to schedule meetings with people\nin different timezones or analyze log files it is not acceptable.\nThe best and simplest solution is to stick with using UTC.  The pytz\npackage encourages using UTC for internal timezone representation by\nincluding a special UTC implementation based on the standard Python\nreference implementation in the Python documentation.\nThe UTC timezone unpickles to be the same instance, and pickles to a\nsmaller size than other pytz tzinfo instances.  The UTC implementation\ncan be obtained as pytz.utc, pytz.UTC, or pytz.timezone(\u2018UTC\u2019).\n>>> import pickle, pytz\n>>> dt = datetime(2005, 3, 1, 14, 13, 21, tzinfo=utc)\n>>> naive = dt.replace(tzinfo=None)\n>>> p = pickle.dumps(dt, 1)\n>>> naive_p = pickle.dumps(naive, 1)\n>>> len(p) - len(naive_p)\n17\n>>> new = pickle.loads(p)\n>>> new == dt\nTrue\n>>> new is dt\nFalse\n>>> new.tzinfo is dt.tzinfo\nTrue\n>>> pytz.utc is pytz.UTC is pytz.timezone('UTC')\nTrue\n\nNote that some other timezones are commonly thought of as the same (GMT,\nGreenwich, Universal, etc.). The definition of UTC is distinct from these\nother timezones, and they are not equivalent. For this reason, they will\nnot compare the same in Python.\n>>> utc == pytz.timezone('GMT')\nFalse\n\nSee the section What is UTC, below.\nIf you insist on working with local times, this library provides a\nfacility for constructing them unambiguously:\n>>> loc_dt = datetime(2002, 10, 27, 1, 30, 00)\n>>> est_dt = eastern.localize(loc_dt, is_dst=True)\n>>> edt_dt = eastern.localize(loc_dt, is_dst=False)\n>>> print(est_dt.strftime(fmt) + ' / ' + edt_dt.strftime(fmt))\n2002-10-27 01:30:00 EDT-0400 / 2002-10-27 01:30:00 EST-0500\n\nIf you pass None as the is_dst flag to localize(), pytz will refuse to\nguess and raise exceptions if you try to build ambiguous or non-existent\ntimes.\nFor example, 1:30am on 27th Oct 2002 happened twice in the US/Eastern\ntimezone when the clocks where put back at the end of Daylight Saving\nTime:\n>>> dt = datetime(2002, 10, 27, 1, 30, 00)\n>>> try:\n...     eastern.localize(dt, is_dst=None)\n... except pytz.exceptions.AmbiguousTimeError:\n...     print('pytz.exceptions.AmbiguousTimeError: %s' % dt)\npytz.exceptions.AmbiguousTimeError: 2002-10-27 01:30:00\n\nSimilarly, 2:30am on 7th April 2002 never happened at all in the\nUS/Eastern timezone, as the clocks where put forward at 2:00am skipping\nthe entire hour:\n>>> dt = datetime(2002, 4, 7, 2, 30, 00)\n>>> try:\n...     eastern.localize(dt, is_dst=None)\n... except pytz.exceptions.NonExistentTimeError:\n...     print('pytz.exceptions.NonExistentTimeError: %s' % dt)\npytz.exceptions.NonExistentTimeError: 2002-04-07 02:30:00\n\nBoth of these exceptions share a common base class to make error handling\neasier:\n>>> isinstance(pytz.AmbiguousTimeError(), pytz.InvalidTimeError)\nTrue\n>>> isinstance(pytz.NonExistentTimeError(), pytz.InvalidTimeError)\nTrue\n\nA special case is where countries change their timezone definitions\nwith no daylight savings time switch. For example, in 1915 Warsaw\nswitched from Warsaw time to Central European time with no daylight savings\ntransition. So at the stroke of midnight on August 5th 1915 the clocks\nwere wound back 24 minutes creating an ambiguous time period that cannot\nbe specified without referring to the timezone abbreviation or the\nactual UTC offset. In this case midnight happened twice, neither time\nduring a daylight saving time period. 
pytz handles this transition by\ntreating the ambiguous period before the switch as daylight savings\ntime, and the ambiguous period after as standard time.\n>>> warsaw = pytz.timezone('Europe/Warsaw')\n>>> amb_dt1 = warsaw.localize(datetime(1915, 8, 4, 23, 59, 59), is_dst=True)\n>>> amb_dt1.strftime(fmt)\n'1915-08-04 23:59:59 WMT+0124'\n>>> amb_dt2 = warsaw.localize(datetime(1915, 8, 4, 23, 59, 59), is_dst=False)\n>>> amb_dt2.strftime(fmt)\n'1915-08-04 23:59:59 CET+0100'\n>>> switch_dt = warsaw.localize(datetime(1915, 8, 5, 00, 00, 00), is_dst=False)\n>>> switch_dt.strftime(fmt)\n'1915-08-05 00:00:00 CET+0100'\n>>> str(switch_dt - amb_dt1)\n'0:24:01'\n>>> str(switch_dt - amb_dt2)\n'0:00:01'\n\nThe best way of creating a time during an ambiguous time period is\nby converting from another timezone such as UTC:\n>>> utc_dt = datetime(1915, 8, 4, 22, 36, tzinfo=pytz.utc)\n>>> utc_dt.astimezone(warsaw).strftime(fmt)\n'1915-08-04 23:36:00 CET+0100'\n\nThe standard Python way of handling all these ambiguities is not to\nhandle them, such as demonstrated in this example using the US/Eastern\ntimezone definition from the Python documentation (Note that this\nimplementation only works for dates between 1987 and 2006 - it is\nincluded for tests only!):\n>>> from pytz.reference import Eastern # pytz.reference only for tests\n>>> dt = datetime(2002, 10, 27, 0, 30, tzinfo=Eastern)\n>>> str(dt)\n'2002-10-27 00:30:00-04:00'\n>>> str(dt + timedelta(hours=1))\n'2002-10-27 01:30:00-05:00'\n>>> str(dt + timedelta(hours=2))\n'2002-10-27 02:30:00-05:00'\n>>> str(dt + timedelta(hours=3))\n'2002-10-27 03:30:00-05:00'\n\nNotice the first two results? At first glance you might think they are\ncorrect, but taking the UTC offset into account you find that they are\nactually two hours appart instead of the 1 hour we asked for.\n>>> from pytz.reference import UTC # pytz.reference only for tests\n>>> str(dt.astimezone(UTC))\n'2002-10-27 04:30:00+00:00'\n>>> str((dt + timedelta(hours=1)).astimezone(UTC))\n'2002-10-27 06:30:00+00:00'\n\n\n\nCountry Information\nA mechanism is provided to access the timezones commonly in use\nfor a particular country, looked up using the ISO 3166 country code.\nIt returns a list of strings that can be used to retrieve the relevant\ntzinfo instance using pytz.timezone():\n>>> print(' '.join(pytz.country_timezones['nz']))\nPacific/Auckland Pacific/Chatham\n\nThe Olson database comes with a ISO 3166 country code to English country\nname mapping that pytz exposes as a dictionary:\n>>> print(pytz.country_names['nz'])\nNew Zealand\n\n\n\nWhat is UTC\n\u2018UTC\u2019 is Coordinated Universal Time. It is a successor to, but distinct\nfrom, Greenwich Mean Time (GMT) and the various definitions of Universal\nTime. UTC is now the worldwide standard for regulating clocks and time\nmeasurement.\nAll other timezones are defined relative to UTC, and include offsets like\nUTC+0800 - hours to add or subtract from UTC to derive the local time. 
No\ndaylight saving time occurs in UTC, making it a useful timezone to perform\ndate arithmetic without worrying about the confusion and ambiguities caused\nby daylight saving time transitions, your country changing its timezone, or\nmobile computers that roam through multiple timezones.\n\n\nHelpers\nThere are two lists of timezones provided.\nall_timezones is the exhaustive list of the timezone names that can\nbe used.\n>>> from pytz import all_timezones\n>>> len(all_timezones) >= 500\nTrue\n>>> 'Etc/Greenwich' in all_timezones\nTrue\n\ncommon_timezones is a list of useful, current timezones. It doesn\u2019t\ncontain deprecated zones or historical zones, except for a few I\u2019ve\ndeemed in common usage, such as US/Eastern (open a bug report if you\nthink other timezones are deserving of being included here). It is also\na sequence of strings.\n>>> from pytz import common_timezones\n>>> len(common_timezones) < len(all_timezones)\nTrue\n>>> 'Etc/Greenwich' in common_timezones\nFalse\n>>> 'Australia/Melbourne' in common_timezones\nTrue\n>>> 'US/Eastern' in common_timezones\nTrue\n>>> 'Canada/Eastern' in common_timezones\nTrue\n>>> 'Australia/Yancowinna' in all_timezones\nTrue\n>>> 'Australia/Yancowinna' in common_timezones\nFalse\n\nBoth common_timezones and all_timezones are alphabetically\nsorted:\n>>> common_timezones_dupe = common_timezones[:]\n>>> common_timezones_dupe.sort()\n>>> common_timezones == common_timezones_dupe\nTrue\n>>> all_timezones_dupe = all_timezones[:]\n>>> all_timezones_dupe.sort()\n>>> all_timezones == all_timezones_dupe\nTrue\n\nall_timezones and common_timezones are also available as sets.\n>>> from pytz import all_timezones_set, common_timezones_set\n>>> 'US/Eastern' in all_timezones_set\nTrue\n>>> 'US/Eastern' in common_timezones_set\nTrue\n>>> 'Australia/Victoria' in common_timezones_set\nFalse\n\nYou can also retrieve lists of timezones used by particular countries\nusing the country_timezones() function. It requires an ISO-3166\ntwo letter country code.\n>>> from pytz import country_timezones\n>>> print(' '.join(country_timezones('ch')))\nEurope/Zurich\n>>> print(' '.join(country_timezones('CH')))\nEurope/Zurich\n\n\n\nInternationalization - i18n/l10n\nPytz is an interface to the IANA database, which uses ASCII names. The Unicode  Consortium\u2019s Unicode Locales (CLDR)\nproject provides translations. Python packages such as\nBabel\nand Thomas Khyn\u2019s l18n package can be used\nto access these translations from Python.\n\n\nLicense\nMIT license.\nThis code is also available as part of Zope 3 under the Zope Public\nLicense,  Version 2.1 (ZPL).\nI\u2019m happy to relicense this code if necessary for inclusion in other\nopen source projects.\n\n\nLatest Versions\nThis package will be updated after releases of the Olson timezone\ndatabase.  The latest version can be downloaded from the Python Package\nIndex.  The code that is used\nto generate this distribution is hosted on Github and available\nusing git:\ngit clone https://github.com/stub42/pytz.git\nAnnouncements of new releases are made on\nLaunchpad, and the\nAtom feed\nhosted there.\n\n\nBugs, Feature Requests & Patches\nBugs should be reported on Github.\nFeature requests are unlikely to be considered, and efforts instead directed\nto timezone support now built into Python or packages that work with it.\n\n\nSecurity Issues\nReports about security issues can be made via Tidelift.\n\n\nIssues & Limitations\n\nThis project is in maintenance mode. 
Projects using Python 3.9 or later\nare best served by using the timezone functionaly now included in core\nPython and packages that work with it such as tzdata.\nOffsets from UTC are rounded to the nearest whole minute, so timezones\nsuch as Europe/Amsterdam pre 1937 will be up to 30 seconds out. This\nwas a limitation of the Python datetime library.\nIf you think a timezone definition is incorrect, I probably can\u2019t fix\nit. pytz is a direct translation of the Olson timezone database, and\nchanges to the timezone definitions need to be made to this source.\nIf you find errors they should be reported to the time zone mailing\nlist, linked from http://www.iana.org/time-zones.\n\n\n\nFurther Reading\nMore info than you want to know about timezones:\nhttps://data.iana.org/time-zones/tz-link.html\n\n\nContact\nStuart Bishop <stuart@stuartbishop.net>\n\n", "description": "World timezone definitions and tools."}, {"name": "pyttsx3", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOffline Text To Speech (TTS) converter for Python \nInstallation :\nLinux installation requirements :\nFeatures :\nUsage :\nFull documentation of the Library\nIncluded TTS engines:\nProject Links :\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\nOffline Text To Speech (TTS) converter for Python \n      \npyttsx3 is a text-to-speech conversion library in Python. Unlike alternative libraries, it works offline.\nBuy me a coffee \ud83d\ude07\nInstallation :\npip install pyttsx3\n\n\nIf you get installation errors , make sure you first upgrade your wheel version using :\npip install --upgrade wheel\n\nLinux installation requirements :\n\n\nIf you are on a linux system and if the voice output is not working , then  :\nInstall espeak , ffmpeg and libespeak1 as shown below:\n sudo apt update && sudo apt install espeak ffmpeg libespeak1\n\n\n\nFeatures :\n\n\u2728Fully OFFLINE text to speech conversion\n\ud83c\udf88 Choose among different voices installed in your system\n\ud83c\udf9b Control speed/rate of speech\n\ud83c\udf9a Tweak Volume\n\ud83d\udcc0 Save the speech audio as a file\n\u2764\ufe0f Simple, powerful, & intuitive API\n\nUsage :\nimport pyttsx3\nengine = pyttsx3.init()\nengine.say(\"I will speak this text\")\nengine.runAndWait()\nSingle line usage with speak function with default options\nimport pyttsx3\npyttsx3.speak(\"I will speak this text\")\nChanging Voice , Rate and Volume :\nimport pyttsx3\nengine = pyttsx3.init() # object creation\n\n\"\"\" RATE\"\"\"\nrate = engine.getProperty('rate')   # getting details of current speaking rate\nprint (rate)                        #printing current voice rate\nengine.setProperty('rate', 125)     # setting up new voice rate\n\n\n\"\"\"VOLUME\"\"\"\nvolume = engine.getProperty('volume')   #getting to know current volume level (min=0 and max=1)\nprint (volume)                          #printing current volume level\nengine.setProperty('volume',1.0)    # setting up volume level  between 0 and 1\n\n\"\"\"VOICE\"\"\"\nvoices = engine.getProperty('voices')       #getting details of current voice\n#engine.setProperty('voice', voices[0].id)  #changing index, changes voices. o for male\nengine.setProperty('voice', voices[1].id)   #changing index, changes voices. 
1 for female\n\nengine.say(\"Hello World!\")\nengine.say('My current speaking rate is ' + str(rate))\nengine.runAndWait()\nengine.stop()\n\n\n\"\"\"Saving Voice to a file\"\"\"\n# On linux make sure that 'espeak' and 'ffmpeg' are installed\nengine.save_to_file('Hello World', 'test.mp3')\nengine.runAndWait()\nFull documentation of the Library\nhttps://pyttsx3.readthedocs.io/en/latest/\nIncluded TTS engines:\n\nsapi5\nnsss\nespeak\n\nFeel free to wrap another text-to-speech engine for use with pyttsx3.\nProject Links :\n\nPyPI (https://pypi.python.org)\nGitHub (https://github.com/nateshmbhat/pyttsx3)\nFull Documentation (https://pyttsx3.readthedocs.org)\n\n\n\n", "description": "Text-to-speech conversion library."}, {"name": "python-pptx", "readme": "\n\n\n\nREADME.rst\n\n\n\n\npython-pptx is a Python library for creating, reading, and updating PowerPoint (.pptx)\nfiles.\nA typical use would be generating a PowerPoint presentation from dynamic content such as\na database query, analytics output, or a JSON payload, perhaps in response to an HTTP\nrequest and downloading the generated PPTX file in response. It runs on any Python\ncapable platform, including macOS and Linux, and does not require the PowerPoint\napplication to be installed or licensed.\nIt can also be used to analyze PowerPoint files from a corpus, perhaps to extract search\nindexing text and images.\nIn can also be used to simply automate the production of a slide or two that would be\ntedious to get right by hand, which is how this all got started.\nMore information is available in the python-pptx documentation.\nBrowse examples with screenshots to get a quick idea what you can do with\npython-pptx.\n\n\n"}, {"name": "python-multipart", "readme": "\n\npython-multipart is an Apache2 licensed streaming multipart parser for Python.\nTest coverage is currently 100%.\nDocumentation is available here.\n\nWhy?\nBecause streaming uploads are awesome for large files.\n\n\nHow to Test\nIf you want to test:\n$ pip install .[dev]\n$ inv test\n\n"}, {"name": "python-dotenv", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npython-dotenv\nGetting Started\nOther Use Cases\nLoad configuration without altering the environment\nParse configuration as a stream\nLoad .env files in IPython\nCommand-line Interface\nFile format\nMultiline values\nVariable without a value\nVariable expansion\nRelated Projects\nAcknowledgements\n\n\n\n\n\nREADME.md\n\n\n\n\npython-dotenv\n\n\nPython-dotenv reads key-value pairs from a .env file and can set them as environment\nvariables. It helps in the development of applications following the\n12-factor principles.\n\nGetting Started\nOther Use Cases\n\nLoad configuration without altering the environment\nParse configuration as a stream\nLoad .env files in IPython\n\n\nCommand-line Interface\nFile format\n\nMultiline values\nVariable expansion\n\n\nRelated Projects\nAcknowledgements\n\nGetting Started\npip install python-dotenv\nIf your application takes its configuration from environment variables, like a 12-factor\napplication, launching it in development is not very practical because you have to set\nthose environment variables yourself.\nTo help you with that, you can add Python-dotenv to your application to make it load the\nconfiguration from a .env file when it is present (e.g. in development) while remaining\nconfigurable via the environment:\nfrom dotenv import load_dotenv\n\nload_dotenv()  # take environment variables from .env.\n\n# Code of your application, which uses environment variables (e.g. 
from `os.environ` or\n# `os.getenv`) as if they came from the actual environment.\nBy default, load_dotenv doesn't override existing environment variables.\nTo configure the development environment, add a .env in the root directory of your\nproject:\n.\n\u251c\u2500\u2500 .env\n\u2514\u2500\u2500 foo.py\n\nThe syntax of .env files supported by python-dotenv is similar to that of Bash:\n# Development settings\nDOMAIN=example.org\nADMIN_EMAIL=admin@${DOMAIN}\nROOT_URL=${DOMAIN}/app\nIf you use variables in values, ensure they are surrounded with { and }, like\n${DOMAIN}, as bare variables such as $DOMAIN are not expanded.\nYou will probably want to add .env to your .gitignore, especially if it contains\nsecrets like a password.\nSee the section \"File format\" below for more information about what you can write in a\n.env file.\nOther Use Cases\nLoad configuration without altering the environment\nThe function dotenv_values works more or less the same way as load_dotenv, except it\ndoesn't touch the environment, it just returns a dict with the values parsed from the\n.env file.\nfrom dotenv import dotenv_values\n\nconfig = dotenv_values(\".env\")  # config = {\"USER\": \"foo\", \"EMAIL\": \"foo@example.org\"}\nThis notably enables advanced configuration management:\nimport os\nfrom dotenv import dotenv_values\n\nconfig = {\n    **dotenv_values(\".env.shared\"),  # load shared development variables\n    **dotenv_values(\".env.secret\"),  # load sensitive variables\n    **os.environ,  # override loaded values with environment variables\n}\nParse configuration as a stream\nload_dotenv and dotenv_values accept streams via their stream\nargument.  It is thus possible to load the variables from sources other than the\nfilesystem (e.g. the network).\nfrom io import StringIO\n\nfrom dotenv import load_dotenv\n\nconfig = StringIO(\"USER=foo\\nEMAIL=foo@example.org\")\nload_dotenv(stream=config)\nLoad .env files in IPython\nYou can use dotenv in IPython.  By default, it will use find_dotenv to search for a\n.env file:\n%load_ext dotenv\n%dotenv\nYou can also specify a path:\n%dotenv relative/or/absolute/path/to/.env\nOptional flags:\n\n-o to override existing variables.\n-v for increased verbosity.\n\nCommand-line Interface\nA CLI interface dotenv is also included, which helps you manipulate the .env file\nwithout manually opening it.\n$ pip install \"python-dotenv[cli]\"\n$ dotenv set USER foo\n$ dotenv set EMAIL foo@example.org\n$ dotenv list\nUSER=foo\nEMAIL=foo@example.org\n$ dotenv list --format=json\n{\n  \"USER\": \"foo\",\n  \"EMAIL\": \"foo@example.org\"\n}\n$ dotenv run -- python foo.py\nRun dotenv --help for more information about the options and subcommands.\nFile format\nThe format is not formally specified and still improves over time.  That being said,\n.env files should mostly look like Bash files.\nKeys can be unquoted or single-quoted. Values can be unquoted, single- or double-quoted.\nSpaces before and after keys, equal signs, and values are ignored. Values can be followed\nby a comment.  Lines can start with the export directive, which does not affect their\ninterpretation.\nAllowed escape sequences:\n\nin single-quoted values: \\\\, \\'\nin double-quoted values: \\\\, \\', \\\", \\a, \\b, \\f, \\n, \\r, \\t, \\v\n\nMultiline values\nIt is possible for single- or double-quoted values to span multiple lines.  
The following\nexamples are equivalent:\nFOO=\"first line\nsecond line\"\nFOO=\"first line\\nsecond line\"\nVariable without a value\nA variable can have no value:\nFOO\nIt results in dotenv_values associating that variable name with the value None (e.g.\n{\"FOO\": None}. load_dotenv, on the other hand, simply ignores such variables.\nThis shouldn't be confused with FOO=, in which case the variable is associated with the\nempty string.\nVariable expansion\nPython-dotenv can interpolate variables using POSIX variable expansion.\nWith load_dotenv(override=True) or dotenv_values(), the value of a variable is the\nfirst of the values defined in the following list:\n\nValue of that variable in the .env file.\nValue of that variable in the environment.\nDefault value, if provided.\nEmpty string.\n\nWith load_dotenv(override=False), the value of a variable is the first of the values\ndefined in the following list:\n\nValue of that variable in the environment.\nValue of that variable in the .env file.\nDefault value, if provided.\nEmpty string.\n\nRelated Projects\n\nHoncho - For managing\nProcfile-based applications.\ndjango-dotenv\ndjango-environ\ndjango-environ-2\ndjango-configuration\ndump-env\nenvirons\ndynaconf\nparse_it\npython-decouple\n\nAcknowledgements\nThis project is currently maintained by Saurabh Kumar and\nBertrand Bonnefoy-Claudet and would not have been possible\nwithout the support of these awesome\npeople.\n\n\n"}, {"name": "python-docx", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n\npython-docx is a Python library for creating and updating Microsoft Word\n(.docx) files.\nMore information is available in the python-docx documentation.\n\n\n"}, {"name": "python-dateutil", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndateutil - powerful extensions to datetime\nInstallation\nDownload\nCode\nFeatures\nQuick example\nContributing\nAuthor\nContact\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\n\ndateutil - powerful extensions to datetime\n  \n\n \n   \nThe dateutil module provides powerful extensions to\nthe standard datetime module, available in Python.\n\nInstallation\ndateutil can be installed from PyPI using pip (note that the package name is\ndifferent from the importable name):\npip install python-dateutil\n\n\nDownload\ndateutil is available on PyPI\nhttps://pypi.org/project/python-dateutil/\nThe documentation is hosted at:\nhttps://dateutil.readthedocs.io/en/stable/\n\nCode\nThe code and issue tracker are hosted on GitHub:\nhttps://github.com/dateutil/dateutil/\n\nFeatures\n\nComputing of relative deltas (next month, next year,\nnext Monday, last week of month, etc);\nComputing of relative deltas between two given\ndate and/or datetime objects;\nComputing of dates based on very flexible recurrence rules,\nusing a superset of the iCalendar\nspecification. Parsing of RFC strings is supported as well.\nGeneric parsing of dates in almost any string format;\nTimezone (tzinfo) implementations for tzfile(5) format\nfiles (/etc/localtime, /usr/share/zoneinfo, etc), TZ\nenvironment string (in all known formats), iCalendar\nformat files, given ranges (with help from relative deltas),\nlocal machine timezone, fixed offset timezone, UTC timezone,\nand Windows registry-based time zones.\nInternal up-to-date world timezone information based on\nOlson's database.\nComputing of Easter Sunday dates for any given year,\nusing Western, Orthodox or Julian algorithms;\nA comprehensive test suite.\n\n\nQuick example\nHere's a snapshot, just to give an idea about the power of the\npackage. 
For more examples, look at the documentation.\nSuppose you want to know how much time is left, in\nyears/months/days/etc, before the next easter happening on a\nyear with a Friday 13th in August, and you want to get today's\ndate out of the \"date\" unix system command. Here is the code:\n>>> from dateutil.relativedelta import *\n>>> from dateutil.easter import *\n>>> from dateutil.rrule import *\n>>> from dateutil.parser import *\n>>> from datetime import *\n>>> now = parse(\"Sat Oct 11 17:13:46 UTC 2003\")\n>>> today = now.date()\n>>> year = rrule(YEARLY,dtstart=now,bymonth=8,bymonthday=13,byweekday=FR)[0].year\n>>> rdelta = relativedelta(easter(year), today)\n>>> print(\"Today is: %s\" % today)\nToday is: 2003-10-11\n>>> print(\"Year with next Aug 13th on a Friday is: %s\" % year)\nYear with next Aug 13th on a Friday is: 2004\n>>> print(\"How far is the Easter of that year: %s\" % rdelta)\nHow far is the Easter of that year: relativedelta(months=+6)\n>>> print(\"And the Easter of that year is: %s\" % (today+rdelta))\nAnd the Easter of that year is: 2004-04-11\nBeing exactly 6 months ahead was really a coincidence :)\n\nContributing\nWe welcome many types of contributions - bug reports, pull requests (code, infrastructure or documentation fixes). For more information about how to contribute to the project, see the CONTRIBUTING.md file in the repository.\n\nAuthor\nThe dateutil module was written by Gustavo Niemeyer <gustavo@niemeyer.net>\nin 2003.\nIt is maintained by:\n\nGustavo Niemeyer <gustavo@niemeyer.net> 2003-2011\nTomi Pievil\u00e4inen <tomi.pievilainen@iki.fi> 2012-2014\nYaron de Leeuw <me@jarondl.net> 2014-2016\nPaul Ganssle <paul@ganssle.io> 2015-\n\nStarting with version 2.4.1 and running until 2.8.2, all source and binary\ndistributions will be signed by a PGP key that has, at the very least, been\nsigned by the key which made the previous release. A table of release signing\nkeys can be found below:\n\n\nReleases\nSigning key fingerprint\n\n\n\n2.4.1-2.8.2\n6B49 ACBA DCF6 BD1C A206 67AB CD54 FCE3 D964 BEFB\n\n\n\nNew releases may have signed tags, but binary and source distributions\nuploaded to PyPI will no longer have GPG signatures attached.\n\nContact\nOur mailing list is available at dateutil@python.org. As it is hosted by the PSF, it is subject to the PSF code of\nconduct.\n\nLicense\nAll contributions after December 1, 2017 released under dual license - either Apache 2.0 License or the BSD 3-Clause License. Contributions before December 1, 2017 - except those those explicitly relicensed - are released only under the BSD 3-Clause License.\n\n\n"}, {"name": "pyth3", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyth3 - Python text markup and conversion\nDesign principles/goals\nExamples\nPython 3 migration\nLimitations\nTests\nDependencies\n\n\n\n\n\nREADME.md\n\n\n\n\npyth3 - Python text markup and conversion\nPyth is intended to make it easy to convert marked-up text between different common formats.\nThis is a (rather incomplete so far) port of pyth 0.6.0 to Python 3.\nMarked-up text means text which has:\n\nParagraphs\nHeadings\nBold, italic, and underlined text\nHyperlinks\nBullet lists\nSimple tables\nVery little else\n\nFormats that have (very varying) degrees of support are\n\nPlain text\nXHTML\nRTF (Rich Text Format)\nPDF (output only)\n\nDesign principles/goals\n\nIgnore unsupported information in input formats (e.g. 
page layout)\nIgnore font issues -- output in a single font.\nIgnore specific text sizes, but maintain italics, boldface, subscript/superscript\nHave no dependencies unless they are written in Python, and work\nMake it easy to add support for new formats, by using an architecture based on plugins and adapters.\n\nExamples\nSee directory examples.\nPython 3 migration\nThe code was originally written for Python 2.\nIt has been partially(!) upgraded to Python 3 compatibility (starting via 'modernize').\nThis does not mean it will actually work!\npyth.plugins.rtf15.reader has been debugged and now appears to work correctly.\npyth.plugins.xhtml.writer has been debugged and now appears to work correctly.\npyth.plugins.plaintext.writer has been debugged and now appears to work correctly.\nEverything else is unknown (or definitely broken on Python 3: even many\nof the tests fail)\nSee directory py3migration for a bit more detail.\n(If you find something is broken on Python 2 that worked before, please\neither fix it or simply stick to pyth version 0.6.0.)\nLimitations\npyth.plugins.rtf15.reader:\n\nbulleted or enumerated items will be returned\nas plain paragraphs (no indentation, no bullets).\ncannot cope with Symbol font correctly:\n\nfrom MS Word: lower-coderange characters (greek mostly) work\nfrom MS Word: higher-coderange characters are missing, because\nWord encodes them in a horribly complicated manner not supported\nby pyth currently\nfrom Wordpad: lower- and higher-coderange characters come out in\nthe wrong encoding (ANSI, I think)\n\n\n\npyth.plugins.xhtml.writer:\n\nvery limited functionality\n\npyth.plugins.plaintext.writer:\n\nvery very limited functionality\n\nOthers:\n\nwill not work on Python 3 without some porting love-and-care\n\nTests\nDon't try to run them all, it's frustrating.\npy.test -v test_readrtf15.py is a good way to run the least frustrating\nsubset of them.\nIt is normal that most others will fail on Python 3.\ntest_readrtf15.py generates test cases dynamically based on\nexisting input files in tests/rtfs and\nexisting reference output files in tests/rtf-as-html and tests/rtf-as-html.\nThe empty or missing output files indicate where functionality is missing,\nwhich nicely indicates possible places to jump in if you want to help.\nDependencies\nOnly the most important two of the dependencies,\nare actually declared in setup.py, because the others are large, yet\nare required only in pyth components not yet ported to Python 3.\nThey are:\n\nreportlab for PDFWriter\ndocutils for LatexWriter\n\n\n\n", "description": "Python library for text markup and conversion between formats."}, {"name": "pytest", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFeatures\nDocumentation\nBugs/Requests\nChangelog\nSupport pytest\npytest for enterprise\nSecurity\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe pytest framework makes it easy to write small tests, yet\nscales to support complex functional testing for applications and libraries.\nAn example of a simple test:\n# content of test_sample.py\ndef inc(x):\n    return x + 1\n\n\ndef test_answer():\n    assert inc(3) == 5\nTo execute it:\n$ pytest\n============================= test session starts =============================\ncollected 1 items\n\ntest_sample.py F\n\n================================== FAILURES ===================================\n_________________________________ test_answer _________________________________\n\n    def test_answer():\n>       assert inc(3) == 5\nE       assert 4 == 
5\nE        +  where 4 = inc(3)\n\ntest_sample.py:5: AssertionError\n========================== 1 failed in 0.04 seconds ===========================\n\nDue to pytest's detailed assertion introspection, only plain assert statements are used. See getting-started for more examples.\n\nFeatures\n\nDetailed info on failing assert statements (no need to remember self.assert* names)\nAuto-discovery\nof test modules and functions\nModular fixtures for\nmanaging small or parametrized long-lived test resources\nCan run unittest (or trial),\nnose test suites out of the box\nPython 3.8+ or PyPy3\nRich plugin architecture, with over 850+ external plugins and thriving community\n\n\nDocumentation\nFor full documentation, including installation, tutorials and PDF documents, please see https://docs.pytest.org/en/stable/.\n\nBugs/Requests\nPlease use the GitHub issue tracker to submit bugs or request features.\n\nChangelog\nConsult the Changelog page for fixes and enhancements of each version.\n\nSupport pytest\nOpen Collective is an online funding platform for open and transparent communities.\nIt provides tools to raise money and share your finances in full transparency.\nIt is the platform of choice for individuals and companies that want to make one-time or\nmonthly donations directly to the project.\nSee more details in the pytest collective.\n\npytest for enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of pytest and thousands of other packages are working with Tidelift to deliver commercial support and\nmaintenance for the open source dependencies you use to build your applications.\nSave time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use.\nLearn more.\n\nSecurity\npytest has never been associated with a security vulnerability, but in any case, to report a\nsecurity vulnerability please use the Tidelift security contact.\nTidelift will coordinate the fix and disclosure.\n\nLicense\nCopyright Holger Krekel and others, 2004.\nDistributed under the terms of the MIT license, pytest is free and open source software.\n\n\n", "description": "Testing framework for Python.", "category": "Testing"}, {"name": "pytesseract", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPython Tesseract\nUSAGE\nINSTALLATION\nTESTING\nLICENSE\nCONTRIBUTORS\n\n\n\n\n\nREADME.rst\n\n\n\n\nPython Tesseract\n\n\n\n\n\n\n\n\n\nPython-tesseract is an optical character recognition (OCR) tool for python.\nThat is, it will recognize and \"read\" the text embedded in images.\nPython-tesseract is a wrapper for Google's Tesseract-OCR Engine.\nIt is also useful as a stand-alone invocation script to tesseract, as it can read all image types\nsupported by the Pillow and Leptonica imaging libraries, including jpeg, png, gif, bmp, tiff,\nand others. 
Additionally, if used as a script, Python-tesseract will print the recognized\ntext instead of writing it to a file.\n\nUSAGE\nQuickstart\nNote: Test images are located in the tests/data folder of the Git repo.\nLibrary usage:\nfrom PIL import Image\n\nimport pytesseract\n\n# If you don't have tesseract executable in your PATH, include the following:\npytesseract.pytesseract.tesseract_cmd = r'<full_path_to_your_tesseract_executable>'\n# Example tesseract_cmd = r'C:\\Program Files (x86)\\Tesseract-OCR\\tesseract'\n\n# Simple image to string\nprint(pytesseract.image_to_string(Image.open('test.png')))\n\n# In order to bypass the image conversions of pytesseract, just use relative or absolute image path\n# NOTE: In this case you should provide tesseract supported images or tesseract will return error\nprint(pytesseract.image_to_string('test.png'))\n\n# List of available languages\nprint(pytesseract.get_languages(config=''))\n\n# French text image to string\nprint(pytesseract.image_to_string(Image.open('test-european.jpg'), lang='fra'))\n\n# Batch processing with a single file containing the list of multiple image file paths\nprint(pytesseract.image_to_string('images.txt'))\n\n# Timeout/terminate the tesseract job after a period of time\ntry:\n    print(pytesseract.image_to_string('test.jpg', timeout=2)) # Timeout after 2 seconds\n    print(pytesseract.image_to_string('test.jpg', timeout=0.5)) # Timeout after half a second\nexcept RuntimeError as timeout_error:\n    # Tesseract processing is terminated\n    pass\n\n# Get bounding box estimates\nprint(pytesseract.image_to_boxes(Image.open('test.png')))\n\n# Get verbose data including boxes, confidences, line and page numbers\nprint(pytesseract.image_to_data(Image.open('test.png')))\n\n# Get information about orientation and script detection\nprint(pytesseract.image_to_osd(Image.open('test.png')))\n\n# Get a searchable PDF\npdf = pytesseract.image_to_pdf_or_hocr('test.png', extension='pdf')\nwith open('test.pdf', 'w+b') as f:\n    f.write(pdf) # pdf type is bytes by default\n\n# Get HOCR output\nhocr = pytesseract.image_to_pdf_or_hocr('test.png', extension='hocr')\n\n# Get ALTO XML output\nxml = pytesseract.image_to_alto_xml('test.png')\nSupport for OpenCV image/NumPy array objects\nimport cv2\n\nimg_cv = cv2.imread(r'/<path_to_image>/digits.png')\n\n# By default OpenCV stores images in BGR format and since pytesseract assumes RGB format,\n# we need to convert from BGR to RGB format/mode:\nimg_rgb = cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB)\nprint(pytesseract.image_to_string(img_rgb))\n# OR\nimg_rgb = Image.frombytes('RGB', img_cv.shape[:2], img_cv, 'raw', 'BGR', 0, 0)\nprint(pytesseract.image_to_string(img_rgb))\nIf you need custom configuration like oem/psm, use the config keyword.\n# Example of adding any additional options\ncustom_oem_psm_config = r'--oem 3 --psm 6'\npytesseract.image_to_string(image, config=custom_oem_psm_config)\n\n# Example of using pre-defined tesseract config file with options\ncfg_filename = 'words'\npytesseract.run_and_get_output(image, extension='txt', config=cfg_filename)\nAdd the following config, if you have tessdata error like: \"Error opening data file...\"\n# Example config: r'--tessdata-dir \"C:\\Program Files (x86)\\Tesseract-OCR\\tessdata\"'\n# It's important to add double quotes around the dir path.\ntessdata_dir_config = r'--tessdata-dir \"<replace_with_your_tessdata_dir_path>\"'\npytesseract.image_to_string(image, lang='chi_sim', config=tessdata_dir_config)\nFunctions\n\nget_languages Returns all currently 
supported languages by Tesseract OCR.\nget_tesseract_version Returns the Tesseract version installed in the system.\nimage_to_string Returns unmodified output as string from Tesseract OCR processing\nimage_to_boxes Returns result containing recognized characters and their box boundaries\nimage_to_data Returns result containing box boundaries, confidences, and other information. Requires Tesseract 3.05+. For more information, please check the Tesseract TSV documentation\nimage_to_osd Returns result containing information about orientation and script detection.\nimage_to_alto_xml Returns result in the form of Tesseract's ALTO XML format.\nrun_and_get_output Returns the raw output from Tesseract OCR. Gives a bit more control over the parameters that are sent to tesseract.\n\nParameters\nimage_to_data(image, lang=None, config='', nice=0, output_type=Output.STRING, timeout=0, pandas_config=None)\n\nimage Object or String - either PIL Image, NumPy array or file path of the image to be processed by Tesseract. If you pass object instead of file path, pytesseract will implicitly convert the image to RGB mode.\nlang String - Tesseract language code string. Defaults to eng if not specified! Example for multiple languages: lang='eng+fra'\nconfig String - Any additional custom configuration flags that are not available via the pytesseract function. For example: config='--psm 6'\nnice Integer - modifies the processor priority for the Tesseract run. Not supported on Windows. Nice adjusts the niceness of unix-like processes.\noutput_type Class attribute - specifies the type of the output, defaults to string.  For the full list of all supported types, please check the definition of pytesseract.Output class.\ntimeout Integer or Float - duration in seconds for the OCR processing, after which, pytesseract will terminate and raise RuntimeError.\npandas_config Dict - only for the Output.DATAFRAME type. Dictionary with custom arguments for pandas.read_csv. Allows you to customize the output of image_to_data.\n\nCLI usage:\npytesseract [-l lang] image_file\n\nINSTALLATION\nPrerequisites:\n\nPython-tesseract requires Python 3.6+\n\nYou will need the Python Imaging Library (PIL) (or the Pillow fork).\nPlease check the Pillow documentation to know the basic Pillow installation.\n\nInstall Google Tesseract OCR\n(additional info how to install the engine on Linux, Mac OSX and Windows).\nYou must be able to invoke the tesseract command as tesseract. If this\nisn't the case, for example because tesseract isn't in your PATH, you will\nhave to change the \"tesseract_cmd\" variable pytesseract.pytesseract.tesseract_cmd.\nUnder Debian/Ubuntu you can use the package tesseract-ocr.\nFor Mac OS users. please install homebrew package tesseract.\nNote: In some rare cases, you might need to additionally install tessconfigs and configs from tesseract-ocr/tessconfigs if the OS specific package doesn't include them.\n\n\n\nInstalling via pip:\n\nCheck the pytesseract package page for more information.\npip install pytesseract\n\nOr if you have git installed:\n\npip install -U git+https://github.com/madmaze/pytesseract.git\n\nInstalling from source:\n\ngit clone https://github.com/madmaze/pytesseract.git\ncd pytesseract && pip install -U .\n\nInstall with conda (via conda-forge):\n\nconda install -c conda-forge pytesseract\n\nTESTING\nTo run this project's test suite, install and run tox. 
Ensure that you have tesseract\ninstalled and in your PATH.\npip install tox\ntox\n\nLICENSE\nCheck the LICENSE file included in the Python-tesseract repository/distribution.\nAs of Python-tesseract 0.3.1 the license is Apache License Version 2.0\n\nCONTRIBUTORS\n\nOriginally written by Samuel Hoffstaetter\nJuarez Bochi\nMatthias Lee\nLars Kistner\nRyan Mitchell\nEmilio Cecchini\nJohn Hagen\nDarius Morawiec\nEddie Bedada\nU\u011furcan Aky\u00fcz\n\n\n\n", "description": "Optical character recognition (OCR) tool for python using Tesseract-OCR."}, {"name": "pyswisseph", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyswisseph\nUsage Example\nLinks\nSource code\nLicensing\nTest Suite\nCredits\n\n\n\n\n\nREADME.rst\n\n\n\n\nPyswisseph\nThis is the Python extension to the Swiss Ephemeris by AstroDienst.\nThe Swiss Ephemeris is the de-facto standard library for astrological\ncalculations. It is a high-precision ephemeris, based upon the DE431\nephemerides from NASA's JPL, and covering the time range 13201 BC to AD 17191.\n\nUsage Example\n>>> import swisseph as swe\n>>> # first set path to ephemeris files\n>>> swe.set_ephe_path('/usr/share/sweph/ephe')\n>>> # find time of next lunar eclipse\n>>> jd = swe.julday(2007, 3, 3) # julian day\n>>> res = swe.lun_eclipse_when(jd)\n>>> ecltime = swe.revjul(res[1][0])\n>>> print(ecltime)\n(2007, 3, 3, 23.347926892340183)\n>>> # get ecliptic position of asteroid 13681 \"Monty Python\"\n>>> jd = swe.julday(2008, 3, 21)\n>>> xx, rflags = swe.calc_ut(jd, swe.AST_OFFSET+13681)\n>>> # print longitude\n>>> print(xx[0])\n0.09843983166646618\n\n\nLinks\n\n\nPyswisseph docs:https://astrorigin.com/pyswisseph\n\nPython Package Index:https://pypi.org/project/pyswisseph\n\nAstroDienst:https://www.astro.com/swisseph\n\n\n\n\nSource code\nClone the Github repository with command:\ngit clone --recurse-submodules https://github.com/astrorigin/pyswisseph\n\nLicensing\nThe Pyswisseph package adopts the GNU Affero General Public License version 3.\nSee the LICENSE.txt file.\nThe original swisseph library is distributed under a dual licensing system:\nGNU Affero General Public License, or Swiss Ephemeris Professional License.\nFor more information, see file libswe/LICENSE.\n\nTest Suite\nFor now, the tests can be run with the standard python3 setup.py test\ncommand. 
For them to pass successfully, you need a basic set of ephemerides\nfiles installed somewhere on your system:\n\nseas_18.se1\nsefstars.txt\nsemo_18.se1\nsepl_18.se1\n\nAll downloadable from https://www.astro.com/ftp/swisseph/ephe/\nThe path to the directory containing those files must be indicated in the\nenvironment variable SE_EPHE_PATH.\nFor example, on a system with the env command, you can do:\nenv SE_EPHE_PATH=\"/usr/share/sweph/ephe\" python3 setup.py test\n\n\nCredits\nAuthor: Stanislas Marquis <stan(at)astrorigin.com>\nPyPI/CI: Jonathan de Jong <jonathan(at)automatia.nl>\n\n\n", "description": "Python bindings for Swiss Ephemeris astrological calculations library."}, {"name": "pyshp", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPyShp\nContents\nOverview\nVersion Changes\n2.3.1\nBug fixes:\n2.3.0\nNew Features:\nImprovements:\nBug fixes:\n2.2.0\nNew Features:\nImprovements:\nBug fixes:\n2.1.3\nBug fixes:\n2.1.2\nBug fixes:\n2.1.1\nImprovements:\nBug fixes:\n2.1.0\nNew Features:\nBug fixes:\n2.0.0\nMajor Changes:\nImportant Fixes:\nThe Basics\nReading Shapefiles\nThe Reader Class\nReading Shapefiles from Local Files\nReading Shapefiles from Zip Files\nReading Shapefiles from URLs\nReading Shapefiles from File-Like Objects\nReading Shapefiles Using the Context Manager\nReading Shapefile Meta-Data\nReading Geometry\nReading Records\nReading Geometry and Records Simultaneously\nWriting Shapefiles\nThe Writer Class\nWriting Shapefiles to Local Files\nWriting Shapefiles to File-Like Objects\nWriting Shapefiles Using the Context Manager\nSetting the Shape Type\nAdding Records\nAdding Geometry\nGeometry and Record Balancing\nWriting .prj files\nAdvanced Use\nCommon Errors and Fixes\nWarnings and Logging\nShapefile Encoding Errors\nReading Large Shapefiles\nIterating through a shapefile\nLimiting which fields to read\nAttribute filtering\nSpatial filtering\nWriting large shapefiles\nMerging multiple shapefiles\nEditing shapefiles\n3D and Other Geometry Types\nShapefiles with measurement (M) values\nShapefiles with elevation (Z) values\n3D MultiPatch Shapefiles\nTesting\nContributors\n\n\n\n\n\nREADME.md\n\n\n\n\nPyShp\nThe Python Shapefile Library (PyShp) reads and writes ESRI Shapefiles in pure Python.\n\n\n\nAuthor: Joel Lawhead\nMaintainers: Karim Bahgat\nVersion: 2.3.1\nDate: 28 July, 2022\nLicense: MIT\n\nContents\n\nOverview\nVersion Changes\nThe Basics\n\nReading Shapefiles\n\nThe Reader Class\n\nReading Shapefiles from Local Files\nReading Shapefiles from Zip Files\nReading Shapefiles from URLs\nReading Shapefiles from File-Like Objects\nReading Shapefiles Using the Context Manager\nReading Shapefile Meta-Data\n\n\nReading Geometry\nReading Records\nReading Geometry and Records Simultaneously\n\n\nWriting Shapefiles\n\nThe Writer Class\n\nWriting Shapefiles to Local Files\nWriting Shapefiles to File-Like Objects\nWriting Shapefiles Using the Context Manager\nSetting the Shape Type\n\n\nAdding Records\nAdding Geometry\nGeometry and Record Balancing\n\n\n\n\nAdvanced Use\n\nCommon Errors and Fixes\n\nWarnings and Logging\nShapefile Encoding Errors\n\n\nReading Large Shapefiles\n\nIterating through a shapefile\nLimiting which fields to read\nAttribute filtering\nSpatial filtering\n\n\nWriting large shapefiles\n\nMerging multiple shapefiles\nEditing shapefiles\n\n\n3D and Other Geometry Types\n\nShapefiles with measurement (M) values\nShapefiles with elevation (Z) values\n3D MultiPatch Shapefiles\n\n\n\n\nTesting\nContributors\n\nOverview\nThe Python Shapefile Library (PyShp) provides read 
and write support for the\nEsri Shapefile format. The Shapefile format is a popular Geographic\nInformation System vector data format created by Esri. For more information\nabout this format please read the well-written \"ESRI Shapefile Technical\nDescription - July 1998\" located at http://www.esri.com/library/whitepapers/p\ndfs/shapefile.pdf\n. The Esri document describes the shp and shx file formats. However a third\nfile format called dbf is also required. This format is documented on the web\nas the \"XBase File Format Description\" and is a simple file-based database\nformat created in the 1960's. For more on this specification see: http://www.clicketyclick.dk/databases/xbase/format/index.html\nBoth the Esri and XBase file-formats are very simple in design and memory\nefficient which is part of the reason the shapefile format remains popular\ndespite the numerous ways to store and exchange GIS data available today.\nPyshp is compatible with Python 2.7-3.x.\nThis document provides examples for using PyShp to read and write shapefiles. However\nmany more examples are continually added to the blog http://GeospatialPython.com,\nand by searching for PyShp on https://gis.stackexchange.com.\nCurrently the sample census blockgroup shapefile referenced in the examples is available on the GitHub project site at\nhttps://github.com/GeospatialPython/pyshp. These\nexamples are straight-forward and you can also easily run them against your\nown shapefiles with minimal modification.\nImportant: If you are new to GIS you should read about map projections.\nPlease visit: https://github.com/GeospatialPython/pyshp/wiki/Map-Projections\nI sincerely hope this library eliminates the mundane distraction of simply\nreading and writing data, and allows you to focus on the challenging and FUN\npart of your geospatial project.\nVersion Changes\n2.3.1\nBug fixes:\n\nFix recently introduced issue where Reader/Writer closes file-like objects provided by user (#244)\n\n2.3.0\nNew Features:\n\nAdded support for pathlib and path-like shapefile filepaths (@mwtoews).\nAllow reading individual file extensions via filepaths.\n\nImprovements:\n\nSimplified setup and deployment (@mwtoews)\nFaster shape access when missing shx file\nSwitch to named logger (see #240)\n\nBug fixes:\n\nMore robust handling of corrupt shapefiles (fixes #235)\nFix errors when writing to individual file-handles (fixes #237)\nRevert previous decision to enforce geojson output ring orientation (detailed explanation at SciTools/cartopy#2012)\nFix test issues in environments without network access (@sebastic, @musicinmybrain).\n\n2.2.0\nNew Features:\n\nRead shapefiles directly from zipfiles.\nRead shapefiles directly from urls.\nAllow fast extraction of only a subset of dbf fields through a fields arg.\nAllow fast filtering which shapes to read from the file through a bbox arg.\n\nImprovements:\n\nMore examples and restructuring of README.\nMore informative Shape to geojson warnings (see #219).\nAdd shapefile.VERBOSE flag to control warnings verbosity (default True).\nShape object information when calling repr().\nFaster ring orientation checks, enforce geojson output ring orientation.\n\nBug fixes:\n\nRemove null-padding at end of some record character fields.\nFix dbf writing error when the number of record list or dict entries didn't match the number of fields.\nHandle rare garbage collection issue after deepcopy (mattijn/topojson#120)\nFix bug where records and shapes would be assigned incorrect record number (@karanrn)\nFix typos in docs 
(@timgates)\n\n2.1.3\nBug fixes:\n\nFix recent bug in geojson hole-in-polygon checking (see #205)\nMisc fixes to allow geo interface dump to json (eg dates as strings)\nHandle additional dbf date null values, and return faulty dates as unicode (see #187)\nAdd writer target typecheck\nFix bugs to allow reading shp/shx/dbf separately\nAllow delayed shapefile loading by passing no args\nFix error with writing empty z/m shapefile (@mcuprjak)\nFix signed_area() so ignores z/m coords\nEnforce writing the 11th field name character as null-terminator (only first 10 are used)\nMinor README fixes\nAdded more tests\n\n2.1.2\nBug fixes:\n\nFix issue where warnings.simplefilter('always') changes global warning behavior [see #203]\n\n2.1.1\nImprovements:\n\nHandle shapes with no coords and represent as geojson with no coords (GeoJSON null-equivalent)\nExpand testing to Python 3.6, 3.7, 3.8 and PyPy; drop 3.3 and 3.4 [@mwtoews]\nAdded pytest testing [@jmoujaes]\n\nBug fixes:\n\nFix incorrect geo interface handling of multipolygons with complex exterior-hole relations [see #202]\nEnforce shapefile requirement of at least one field, to avoid writing invalid shapefiles [@Jonty]\nFix Reader geo interface including DeletionFlag field in feature properties [@nnseva]\nFix polygons not being auto closed, which was accidentally dropped\nFix error for null geometries in feature geojson\nMisc docstring cleanup [@fiveham]\n\n2.1.0\nNew Features:\n\nAdded back read/write support for unicode field names.\nImproved Record representation\nMore support for geojson on Reader, ShapeRecord, ShapeRecords, and shapes()\n\nBug fixes:\n\nFixed error when reading optional m-values\nFixed Record attribute autocomplete in Python 3\nMisc readme cleanup\n\n2.0.0\nThe newest version of PyShp, version 2.0 introduced some major new improvements.\nA great thanks to all who have contributed code and raised issues, and for everyone's\npatience and understanding during the transition period.\nSome of the new changes are incompatible with previous versions.\nUsers of the previous version 1.x should therefore take note of the following changes\n(Note: Some contributor attributions may be missing):\nMajor Changes:\n\nFull support for unicode text, with custom encoding, and exception handling.\n\nMeans that the Reader returns unicode, and the Writer accepts unicode.\n\n\nPyShp has been simplified to a pure input-output library using the Reader and Writer classes, dropping the Editor class.\nSwitched to a new streaming approach when writing files, keeping memory-usage at a minimum:\n\nSpecify filepath/destination and text encoding when creating the Writer.\nThe file is written incrementally with each call to shape/record.\nAdding shapes is now done using dedicated methods for each shapetype.\n\n\nReading shapefiles is now more convenient:\n\nShapefiles can be opened using the context manager, and files are properly closed.\nShapefiles can be iterated, have a length, and supports the geo interface.\nNew ways of inspecting shapefile metadata by printing. [@megies]\nMore convenient accessing of Record values as attributes. [@philippkraft]\nMore convenient shape type name checking. 
[@megies]\n\n\nAdd more support and documentation for MultiPatch 3D shapes.\nThe Reader \"elevation\" and \"measure\" attributes now renamed \"zbox\" and \"mbox\", to make it clear they refer to the min/max values.\nBetter documentation of previously unclear aspects, such as field types.\n\nImportant Fixes:\n\nMore reliable/robust:\n\nFixed shapefile bbox error for empty or point type shapefiles. [@mcuprjak]\nReading and writing Z and M type shapes is now more robust, fixing many errors, and has been added to the documentation. [@ShinNoNoir]\nImproved parsing of field value types, fixed errors and made more flexible.\nFixed bug when writing shapefiles with datefield and date values earlier than 1900 [@megies]\n\n\nFix some geo interface errors, including checking polygon directions.\nBug fixes for reading from case sensitive file names, individual files separately, and from file-like objects. [@gastoneb, @kb003308, @erickskb]\nEnforce maximum field limit. [@mwtoews]\n\nThe Basics\nBefore doing anything you must import the library.\n>>> import shapefile\n\nThe examples below will use a shapefile created from the U.S. Census Bureau\nBlockgroups data set near San Francisco, CA and available in the git\nrepository of the PyShp GitHub site.\nReading Shapefiles\nThe Reader Class\nReading Shapefiles from Local Files\nTo read a shapefile create a new \"Reader\" object and pass it the name of an\nexisting shapefile. The shapefile format is actually a collection of three\nfiles. You specify the base filename of the shapefile or the complete filename\nof any of the shapefile component files.\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups\")\n\nOR\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups.shp\")\n\nOR\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups.dbf\")\n\nOR any of the other 5+ formats which are potentially part of a shapefile. The\nlibrary does not care about file extensions. You can also specify that you only\nwant to read some of the file extensions through the use of keyword arguments:\n>>> sf = shapefile.Reader(dbf=\"shapefiles/blockgroups.dbf\")\n\nReading Shapefiles from Zip Files\nIf your shapefile is wrapped inside a zip file, the library is able to handle that too, meaning you don't have to worry about unzipping the contents:\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups.zip\")\n\nIf the zip file contains multiple shapefiles, just specify which shapefile to read by additionally specifying the relative path after the \".zip\" part:\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups_multishapefile.zip/blockgroups2.shp\")\n\nReading Shapefiles from URLs\nFinally, you can use all of the above methods to read shapefiles directly from the internet, by giving a url instead of a local path, e.g.:\n>>> # from a zipped shapefile on website\n>>> sf = shapefile.Reader(\"https://biogeo.ucdavis.edu/data/diva/rrd/NIC_rrd.zip\")\n\n>>> # from a shapefile collection of files in a github repository\n>>> sf = shapefile.Reader(\"https://github.com/nvkelso/natural-earth-vector/blob/master/110m_cultural/ne_110m_admin_0_tiny_countries.shp?raw=true\")\n\nThis will automatically download the file(s) to a temporary location before reading, saving you a lot of time and repetitive boilerplate code when you just want quick access to some external data.\nReading Shapefiles from File-Like Objects\nYou can also load shapefiles from any Python file-like object using keyword\narguments to specify any of the three files. 
This feature is very powerful and\nallows you to custom load shapefiles from arbitrary storage formats, such as a protected url or zip file, a serialized object, or in some cases a database.\n>>> myshp = open(\"shapefiles/blockgroups.shp\", \"rb\")\n>>> mydbf = open(\"shapefiles/blockgroups.dbf\", \"rb\")\n>>> r = shapefile.Reader(shp=myshp, dbf=mydbf)\n\nNotice in the examples above the shx file is never used. The shx file is a\nvery simple fixed-record index for the variable-length records in the shp\nfile. This file is optional for reading. If it's available PyShp will use the\nshx file to access shape records a little faster but will do just fine without\nit.\nReading Shapefiles Using the Context Manager\nThe \"Reader\" class can be used as a context manager, to ensure open file\nobjects are properly closed when done reading the data:\n>>> with shapefile.Reader(\"shapefiles/blockgroups.shp\") as shp:\n...     print(shp)\nshapefile Reader\n    663 shapes (type 'POLYGON')\n    663 records (44 fields)\n\nReading Shapefile Meta-Data\nShapefiles have a number of attributes for inspecting the file contents.\nA shapefile is a container for a specific type of geometry, and this can be checked using the\nshapeType attribute.\n>>> sf = shapefile.Reader(\"shapefiles/blockgroups.dbf\")\n>>> sf.shapeType\n5\n\nShape types are represented by numbers between 0 and 31 as defined by the\nshapefile specification and listed below. It is important to note that the numbering system has\nseveral reserved numbers that have not been used yet, therefore the numbers of\nthe existing shape types are not sequential:\n\nNULL = 0\nPOINT = 1\nPOLYLINE = 3\nPOLYGON = 5\nMULTIPOINT = 8\nPOINTZ = 11\nPOLYLINEZ = 13\nPOLYGONZ = 15\nMULTIPOINTZ = 18\nPOINTM = 21\nPOLYLINEM = 23\nPOLYGONM = 25\nMULTIPOINTM = 28\nMULTIPATCH = 31\n\nBased on this we can see that our blockgroups shapefile contains\nPolygon type shapes. The shape types are also defined as constants in\nthe shapefile module, so that we can compare types more intuitively:\n>>> sf.shapeType == shapefile.POLYGON\nTrue\n\nFor convenience, you can also get the name of the shape type as a string:\n>>> sf.shapeTypeName == 'POLYGON'\nTrue\n\nOther pieces of meta-data that we can check include the number of features\nand the bounding box area the shapefile covers:\n>>> len(sf)\n663\n>>> sf.bbox\n[-122.515048, 37.652916, -122.327622, 37.863433]\n\nFinally, if you would prefer to work with the entire shapefile in a different\nformat, you can convert all of it to a GeoJSON dictionary, although you may lose\nsome information in the process, such as z- and m-values:\n>>> sf.__geo_interface__['type']\n'FeatureCollection'\n\nReading Geometry\nA shapefile's geometry is the collection of points or shapes made from\nvertices and implied arcs representing physical locations. All types of\nshapefiles just store points. The metadata about the points determine how they\nare handled by software.\nYou can get a list of the shapefile's geometry by calling the shapes()\nmethod.\n>>> shapes = sf.shapes()\n\nThe shapes method returns a list of Shape objects describing the geometry of\neach shape record.\n>>> len(shapes)\n663\n\nTo read a single shape by calling its index use the shape() method. The index\nis the shape's count from 0. 
So to read the 8th shape record you would use its\nindex which is 7.\n>>> s = sf.shape(7)\n>>> s\nShape #7: POLYGON\n\n>>> # Read the bbox of the 8th shape to verify\n>>> # Round coordinates to 3 decimal places\n>>> ['%.3f' % coord for coord in s.bbox]\n['-122.450', '37.801', '-122.442', '37.808']\n\nEach shape record (except Points) contains the following attributes. Records of\nshapeType Point do not have a bounding box 'bbox'.\n>>> for name in dir(shapes[3]):\n...     if not name.startswith('_'):\n...         name\n'bbox'\n'oid'\n'parts'\n'points'\n'shapeType'\n'shapeTypeName'\n\n\n\noid: The shape's index position in the original shapefile.\n>>> shapes[3].oid\n3\n\n\n\nshapeType: an integer representing the type of shape as defined by the\nshapefile specification.\n>>> shapes[3].shapeType\n5\n\n\n\nshapeTypeName: a string representation of the type of shape as defined by shapeType. Read-only.\n>>> shapes[3].shapeTypeName\n'POLYGON'\n\n\n\nbbox: If the shape type contains multiple points this tuple describes the\nlower left (x,y) coordinate and upper right corner coordinate creating a\ncomplete box around the points. If the shapeType is a\nNull (shapeType == 0) then an AttributeError is raised.\n>>> # Get the bounding box of the 4th shape.\n>>> # Round coordinates to 3 decimal places\n>>> bbox = shapes[3].bbox\n>>> ['%.3f' % coord for coord in bbox]\n['-122.486', '37.787', '-122.446', '37.811']\n\n\n\nparts: Parts simply group collections of points into shapes. If the shape\nrecord has multiple parts this attribute contains the index of the first\npoint of each part. If there is only one part then a list containing 0 is\nreturned.\n>>> shapes[3].parts\n[0]\n\n\n\npoints: The points attribute contains a list of tuples containing an\n(x,y) coordinate for each point in the shape.\n>>> len(shapes[3].points)\n173\n>>> # Get the 8th point of the fourth shape\n>>> # Truncate coordinates to 3 decimal places\n>>> shape = shapes[3].points[7]\n>>> ['%.3f' % coord for coord in shape]\n['-122.471', '37.787']\n\n\n\nIn most cases, however, if you need to do more than just type or bounds checking, you may want\nto convert the geometry to the more human-readable GeoJSON format,\nwhere lines and polygons are grouped for you:\n>>> s = sf.shape(0)\n>>> geoj = s.__geo_interface__\n>>> geoj[\"type\"]\n'MultiPolygon'\n\nThe results from the shapes() method similarly supports converting to GeoJSON:\n>>> shapes.__geo_interface__['type']\n'GeometryCollection'\n\nNote: In some cases, if the conversion from shapefile geometry to GeoJSON encountered any problems\nor potential issues, a warning message will be displayed with information about the affected\ngeometry. To ignore or suppress these warnings, you can disable this behavior by setting the\nmodule constant VERBOSE to False:\n>>> shapefile.VERBOSE = False\n\nReading Records\nA record in a shapefile contains the attributes for each shape in the\ncollection of geometries. Records are stored in the dbf file. The link between\ngeometry and attributes is the foundation of all geographic information systems.\nThis critical link is implied by the order of shapes and corresponding records\nin the shp geometry file and the dbf attribute file.\nThe field names of a shapefile are available as soon as you read a shapefile.\nYou can call the \"fields\" attribute of the shapefile as a Python list. 
Each\nfield is a Python list with the following information:\n\nField name: the name describing the data at this column index.\nField type: the type of data at this column index. Types can be:\n\n\"C\": Characters, text.\n\"N\": Numbers, with or without decimals.\n\"F\": Floats (same as \"N\").\n\"L\": Logical, for boolean True/False values.\n\"D\": Dates.\n\"M\": Memo, has no meaning within a GIS and is part of the xbase spec instead.\n\n\nField length: the length of the data found at this column index. Older GIS\nsoftware may truncate this length to 8 or 11 characters for \"Character\"\nfields.\nDecimal length: the number of decimal places found in \"Number\" fields.\n\nTo see the fields for the Reader object above (sf) call the \"fields\"\nattribute:\n>>> fields = sf.fields\n\n>>> assert fields == [(\"DeletionFlag\", \"C\", 1, 0), [\"AREA\", \"N\", 18, 5],\n... [\"BKG_KEY\", \"C\", 12, 0], [\"POP1990\", \"N\", 9, 0], [\"POP90_SQMI\", \"N\", 10, 1],\n... [\"HOUSEHOLDS\", \"N\", 9, 0],\n... [\"MALES\", \"N\", 9, 0], [\"FEMALES\", \"N\", 9, 0], [\"WHITE\", \"N\", 9, 0],\n... [\"BLACK\", \"N\", 8, 0], [\"AMERI_ES\", \"N\", 7, 0], [\"ASIAN_PI\", \"N\", 8, 0],\n... [\"OTHER\", \"N\", 8, 0], [\"HISPANIC\", \"N\", 8, 0], [\"AGE_UNDER5\", \"N\", 8, 0],\n... [\"AGE_5_17\", \"N\", 8, 0], [\"AGE_18_29\", \"N\", 8, 0], [\"AGE_30_49\", \"N\", 8, 0],\n... [\"AGE_50_64\", \"N\", 8, 0], [\"AGE_65_UP\", \"N\", 8, 0],\n... [\"NEVERMARRY\", \"N\", 8, 0], [\"MARRIED\", \"N\", 9, 0], [\"SEPARATED\", \"N\", 7, 0],\n... [\"WIDOWED\", \"N\", 8, 0], [\"DIVORCED\", \"N\", 8, 0], [\"HSEHLD_1_M\", \"N\", 8, 0],\n... [\"HSEHLD_1_F\", \"N\", 8, 0], [\"MARHH_CHD\", \"N\", 8, 0],\n... [\"MARHH_NO_C\", \"N\", 8, 0], [\"MHH_CHILD\", \"N\", 7, 0],\n... [\"FHH_CHILD\", \"N\", 7, 0], [\"HSE_UNITS\", \"N\", 9, 0], [\"VACANT\", \"N\", 7, 0],\n... [\"OWNER_OCC\", \"N\", 8, 0], [\"RENTER_OCC\", \"N\", 8, 0],\n... [\"MEDIAN_VAL\", \"N\", 7, 0], [\"MEDIANRENT\", \"N\", 4, 0],\n... [\"UNITS_1DET\", \"N\", 8, 0], [\"UNITS_1ATT\", \"N\", 7, 0], [\"UNITS2\", \"N\", 7, 0],\n... [\"UNITS3_9\", \"N\", 8, 0], [\"UNITS10_49\", \"N\", 8, 0],\n... [\"UNITS50_UP\", \"N\", 8, 0], [\"MOBILEHOME\", \"N\", 7, 0]]\n\nThe first field of a dbf file is always a 1-byte field called \"DeletionFlag\",\nwhich indicates records that have been deleted but not removed. However,\nsince this flag is very rarely used, PyShp currently will return all records\nregardless of their deletion flag, and the flag is also not included in the list of\nrecord values. In other words, the DeletionFlag field has no real purpose, and\nshould in most cases be ignored. For instance, to get a list of all fieldnames:\n>>> fieldnames = [f[0] for f in sf.fields[1:]]\n\nYou can get a list of the shapefile's records by calling the records() method:\n>>> records = sf.records()\n\n>>> len(records)\n663\n\nTo read a single record call the record() method with the record's index:\n>>> rec = sf.record(3)\n\nEach record is a list-like Record object containing the values corresponding to each field in\nthe field list (except the DeletionFlag). A record's values can be accessed by positional indexing or slicing.\nFor example in the blockgroups shapefile the 2nd and 3rd fields are the blockgroup id\nand the 1990 population count of that San Francisco blockgroup:\n>>> rec[1:3]\n['060750601001', 4715]\n\nFor simpler access, the fields of a record can also accessed via the name of the field,\neither as a key or as an attribute name. 
The blockgroup id (BKG_KEY) of the blockgroups shapefile\ncan also be retrieved as:\n>>> rec['BKG_KEY']\n'060750601001'\n\n>>> rec.BKG_KEY\n'060750601001'\n\nThe record values can be easily integrated with other programs by converting it to a field-value dictionary:\n>>> dct = rec.as_dict()\n>>> sorted(dct.items())\n[('AGE_18_29', 1467), ('AGE_30_49', 1681), ('AGE_50_64', 92), ('AGE_5_17', 848), ('AGE_65_UP', 30), ('AGE_UNDER5', 597), ('AMERI_ES', 6), ('AREA', 2.34385), ('ASIAN_PI', 452), ('BKG_KEY', '060750601001'), ('BLACK', 1007), ('DIVORCED', 149), ('FEMALES', 2095), ('FHH_CHILD', 16), ('HISPANIC', 416), ('HOUSEHOLDS', 1195), ('HSEHLD_1_F', 40), ('HSEHLD_1_M', 22), ('HSE_UNITS', 1258), ('MALES', 2620), ('MARHH_CHD', 79), ('MARHH_NO_C', 958), ('MARRIED', 2021), ('MEDIANRENT', 739), ('MEDIAN_VAL', 337500), ('MHH_CHILD', 0), ('MOBILEHOME', 0), ('NEVERMARRY', 703), ('OTHER', 288), ('OWNER_OCC', 66), ('POP1990', 4715), ('POP90_SQMI', 2011.6), ('RENTER_OCC', 3733), ('SEPARATED', 49), ('UNITS10_49', 49), ('UNITS2', 160), ('UNITS3_9', 672), ('UNITS50_UP', 0), ('UNITS_1ATT', 302), ('UNITS_1DET', 43), ('VACANT', 93), ('WHITE', 2962), ('WIDOWED', 37)]\n\nIf at a later point you need to check the record's index position in the original\nshapefile, you can do this through the \"oid\" attribute:\n>>> rec.oid\n3\n\nReading Geometry and Records Simultaneously\nYou may want to examine both the geometry and the attributes for a record at\nthe same time. The shapeRecord() and shapeRecords() method let you do just\nthat.\nCalling the shapeRecords() method will return the geometry and attributes for\nall shapes as a list of ShapeRecord objects. Each ShapeRecord instance has a\n\"shape\" and \"record\" attribute. The shape attribute is a Shape object as\ndiscussed in the first section \"Reading Geometry\". The record attribute is a\nlist-like object containing field values as demonstrated in the \"Reading Records\" section.\n>>> shapeRecs = sf.shapeRecords()\n\nLet's read the blockgroup key and the population for the 4th blockgroup:\n>>> shapeRecs[3].record[1:3]\n['060750601001', 4715]\n\nThe results from the shapeRecords() method is a list-like object that can be easily converted\nto GeoJSON through the __geo_interface__:\n>>> shapeRecs.__geo_interface__['type']\n'FeatureCollection'\n\nThe shapeRecord() method reads a single shape/record pair at the specified index.\nTo get the 4th shape record from the blockgroups shapefile use the third index:\n>>> shapeRec = sf.shapeRecord(3)\n>>> shapeRec.record[1:3]\n['060750601001', 4715]\n\nEach individual shape record also supports the __geo_interface__ to convert it to a GeoJSON feature:\n>>> shapeRec.__geo_interface__['type']\n'Feature'\n\nWriting Shapefiles\nThe Writer Class\nPyShp tries to be as flexible as possible when writing shapefiles while\nmaintaining some degree of automatic validation to make sure you don't\naccidentally write an invalid file.\nPyShp can write just one of the component files such as the shp or dbf file\nwithout writing the others. So in addition to being a complete shapefile\nlibrary, it can also be used as a basic dbf (xbase) library. Dbf files are a\ncommon database format which are often useful as a standalone simple database\nformat. And even shp files occasionally have uses as a standalone format. Some\nweb-based GIS systems use an user-uploaded shp file to specify an area of\ninterest. 
Many precision agriculture chemical field sprayers also use the shp\nformat as a control file for the sprayer system (usually in combination with\ncustom database file formats).\nWriting Shapefiles to Local Files\nTo create a shapefile you begin by initiating a new Writer instance, passing it\nthe file path and name to save to:\n>>> w = shapefile.Writer('shapefiles/test/testfile')\n>>> w.field('field1', 'C')\n\nFile extensions are optional when reading or writing shapefiles. If you specify\nthem PyShp ignores them anyway. When you save files you can specify a base\nfile name that is used for all three file types. Or you can specify a name for\none or more file types:\n>>> w = shapefile.Writer(dbf='shapefiles/test/onlydbf.dbf')\n>>> w.field('field1', 'C')\n\nIn that case, any file types not assigned will not\nsave and only file types with file names will be saved.\nWriting Shapefiles to File-Like Objects\nJust as you can read shapefiles from python file-like objects you can also\nwrite to them:\n>>> try:\n...     from StringIO import StringIO\n... except ImportError:\n...     from io import BytesIO as StringIO\n>>> shp = StringIO()\n>>> shx = StringIO()\n>>> dbf = StringIO()\n>>> w = shapefile.Writer(shp=shp, shx=shx, dbf=dbf)\n>>> w.field('field1', 'C')\n>>> w.record()\n>>> w.null()\n>>> w.close()\n\n>>> # To read back the files you could call the \"StringIO.getvalue()\" method later.\n>>> assert shp.getvalue()\n>>> assert shx.getvalue()\n>>> assert dbf.getvalue()\n\n>>> # In fact, you can read directly from them using the Reader\n>>> r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)\n>>> len(r)\n1\n\nWriting Shapefiles Using the Context Manager\nThe \"Writer\" class automatically closes the open files and writes the final headers once it is garbage collected.\nIn case of a crash and to make the code more readable, it is nevertheless recommended\nyou do this manually by calling the \"close()\" method:\n>>> w.close()\n\nAlternatively, you can also use the \"Writer\" class as a context manager, to ensure open file\nobjects are properly closed and final headers written once you exit the with-clause:\n>>> with shapefile.Writer(\"shapefiles/test/contextwriter\") as w:\n... \tw.field('field1', 'C')\n... \tpass\n\nSetting the Shape Type\nThe shape type defines the type of geometry contained in the shapefile. 
All of\nthe shapes must match the shape type setting.\nThere are three ways to set the shape type:\n\nSet it when creating the class instance.\nSet it by assigning a value to an existing class instance.\nSet it automatically to the type of the first non-null shape by saving the shapefile.\n\nTo manually set the shape type for a Writer object when creating the Writer:\n>>> w = shapefile.Writer('shapefiles/test/shapetype', shapeType=3)\n>>> w.field('field1', 'C')\n\n>>> w.shapeType\n3\n\nOR you can set it after the Writer is created:\n>>> w.shapeType = 1\n\n>>> w.shapeType\n1\n\nAdding Records\nBefore you can add records you must first create the fields that define what types of\nvalues will go into each attribute.\nThere are several different field types, all of which support storing None values as NULL.\nText fields are created using the 'C' type, and the third 'size' argument can be customized to the expected\nlength of text values to save space:\n>>> w = shapefile.Writer('shapefiles/test/dtype')\n>>> w.field('TEXT', 'C')\n>>> w.field('SHORT_TEXT', 'C', size=5)\n>>> w.field('LONG_TEXT', 'C', size=250)\n>>> w.null()\n>>> w.record('Hello', 'World', 'World'*50)\n>>> w.close()\n\n>>> r = shapefile.Reader('shapefiles/test/dtype')\n>>> assert r.record(0) == ['Hello', 'World', 'World'*50]\n\nDate fields are created using the 'D' type, and can be created using either\ndate objects, lists, or a YYYYMMDD formatted string.\nField length or decimal have no impact on this type:\n>>> from datetime import date\n>>> w = shapefile.Writer('shapefiles/test/dtype')\n>>> w.field('DATE', 'D')\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.record(date(1898,1,30))\n>>> w.record([1998,1,30])\n>>> w.record('19980130')\n>>> w.record(None)\n>>> w.close()\n\n>>> r = shapefile.Reader('shapefiles/test/dtype')\n>>> assert r.record(0) == [date(1898,1,30)]\n>>> assert r.record(1) == [date(1998,1,30)]\n>>> assert r.record(2) == [date(1998,1,30)]\n>>> assert r.record(3) == [None]\n\nNumeric fields are created using the 'N' type (or the 'F' type, which is exactly the same).\nBy default the fourth decimal argument is set to zero, essentially creating an integer field.\nTo store floats you must set the decimal argument to the precision of your choice.\nTo store very large numbers you must increase the field length size to the total number of digits\n(including comma and minus).\n>>> w = shapefile.Writer('shapefiles/test/dtype')\n>>> w.field('INT', 'N')\n>>> w.field('LOWPREC', 'N', decimal=2)\n>>> w.field('MEDPREC', 'N', decimal=10)\n>>> w.field('HIGHPREC', 'N', decimal=30)\n>>> w.field('FTYPE', 'F', decimal=10)\n>>> w.field('LARGENR', 'N', 101)\n>>> nr = 1.3217328\n>>> w.null()\n>>> w.null()\n>>> w.record(INT=nr, LOWPREC=nr, MEDPREC=nr, HIGHPREC=-3.2302e-25, FTYPE=nr, LARGENR=int(nr)*10**100)\n>>> w.record(None, None, None, None, None, None)\n>>> w.close()\n\n>>> r = shapefile.Reader('shapefiles/test/dtype')\n>>> assert r.record(0) == [1, 1.32, 1.3217328, -3.2302e-25, 1.3217328, 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000]\n>>> assert r.record(1) == [None, None, None, None, None, None]\n\nFinally, we can create boolean fields by setting the type to 'L'.\nThis field can take True or False values, or 1 (True) or 0 (False).\nNone is interpreted as missing.\n>>> w = shapefile.Writer('shapefiles/test/dtype')\n>>> w.field('BOOLEAN', 'L')\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.null()\n>>> w.record(True)\n>>> 
w.record(1)\n>>> w.record(False)\n>>> w.record(0)\n>>> w.record(None)\n>>> w.record(\"Nonesense\")\n>>> w.close()\n\n>>> r = shapefile.Reader('shapefiles/test/dtype')\n>>> r.record(0)\nRecord #0: [True]\n>>> r.record(1)\nRecord #1: [True]\n>>> r.record(2)\nRecord #2: [False]\n>>> r.record(3)\nRecord #3: [False]\n>>> r.record(4)\nRecord #4: [None]\n>>> r.record(5)\nRecord #5: [None]\n\nYou can also add attributes using keyword arguments where the keys are field names.\n>>> w = shapefile.Writer('shapefiles/test/dtype')\n>>> w.field('FIRST_FLD','C','40')\n>>> w.field('SECOND_FLD','C','40')\n>>> w.null()\n>>> w.null()\n>>> w.record('First', 'Line')\n>>> w.record(FIRST_FLD='First', SECOND_FLD='Line')\n>>> w.close()\n\nAdding Geometry\nGeometry is added using one of several convenience methods. The \"null\" method is used\nfor null shapes, \"point\" is used for point shapes, \"multipoint\" is used for multipoint shapes, \"line\" for lines,\n\"poly\" for polygons.\nAdding a Null shape\nA shapefile may contain some records for which geometry is not available, and may be set using the \"null\" method.\nBecause Null shape types (shape type 0) have no geometry the \"null\" method is called without any arguments.\n>>> w = shapefile.Writer('shapefiles/test/null')\n>>> w.field('name', 'C')\n\n>>> w.null()\n>>> w.record('nullgeom')\n\n>>> w.close()\n\nAdding a Point shape\nPoint shapes are added using the \"point\" method. A point is specified by an x and\ny value.\n>>> w = shapefile.Writer('shapefiles/test/point')\n>>> w.field('name', 'C')\n\n>>> w.point(122, 37) \n>>> w.record('point1')\n\n>>> w.close()\n\nAdding a MultiPoint shape\nIf your point data allows for the possibility of multiple points per feature, use \"multipoint\" instead.\nThese are specified as a list of xy point coordinates.\n>>> w = shapefile.Writer('shapefiles/test/multipoint')\n>>> w.field('name', 'C')\n\n>>> w.multipoint([[122,37], [124,32]]) \n>>> w.record('multipoint1')\n\n>>> w.close()\n\nAdding a LineString shape\nFor LineString shapefiles, each shape is given as a list of one or more linear features.\nEach of the linear features must have at least two points.\n>>> w = shapefile.Writer('shapefiles/test/line')\n>>> w.field('name', 'C')\n\n>>> w.line([\n...\t\t\t[[1,5],[5,5],[5,1],[3,3],[1,1]], # line 1\n...\t\t\t[[3,2],[2,6]] # line 2\n...\t\t\t])\n\n>>> w.record('linestring1')\n\n>>> w.close()\n\nAdding a Polygon shape\nSimilarly to LineString, Polygon shapes consist of multiple polygons, and must be given as a list of polygons.\nThe main difference is that polygons must have at least 4 points and the last point must be the same as the first.\nIt's also okay if you forget to repeat the first point at the end; PyShp automatically checks and closes the polygons\nif you don't.\nIt's important to note that for Polygon shapefiles, your polygon coordinates must be ordered in a clockwise direction.\nIf any of the polygons have holes, then the hole polygon coordinates must be ordered in a counterclockwise direction.\nThe direction of your polygons determines how shapefile readers will distinguish between polygon outlines and holes.\n>>> w = shapefile.Writer('shapefiles/test/polygon')\n>>> w.field('name', 'C')\n\n>>> w.poly([\n...\t        [[113,24], [112,32], [117,36], [122,37], [118,20]], # poly 1\n...\t        [[116,29],[116,26],[119,29],[119,32]], # hole 1\n...         [[15,2], [17,6], [22,7]]  # poly 2\n...        
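# note: the outer rings above are wound clockwise and the hole ring counterclockwise, per the winding rules described above\n...        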
])\n>>> w.record('polygon1')\n\n>>> w.close()\n\nAdding from an existing Shape object\nFinally, geometry can be added by passing an existing \"Shape\" object to the \"shape\" method.\nYou can also pass it any GeoJSON dictionary or __geo_interface__ compatible object.\nThis can be particularly useful for copying from one file to another:\n>>> r = shapefile.Reader('shapefiles/test/polygon')\n\n>>> w = shapefile.Writer('shapefiles/test/copy')\n>>> w.fields = r.fields[1:] # skip first deletion field\n\n>>> # adding existing Shape objects\n>>> for shaperec in r.iterShapeRecords():\n...     w.record(*shaperec.record)\n...     w.shape(shaperec.shape)\n\n>>> # or GeoJSON dicts\n>>> for shaperec in r.iterShapeRecords():\n...     w.record(*shaperec.record)\n...     w.shape(shaperec.shape.__geo_interface__)\n\n>>> w.close()\t\n\nGeometry and Record Balancing\nBecause every shape must have a corresponding record it is critical that the\nnumber of records equals the number of shapes to create a valid shapefile. You\nmust take care to add records and shapes in the same order so that the record\ndata lines up with the geometry data. For example:\n>>> w = shapefile.Writer('shapefiles/test/balancing', shapeType=shapefile.POINT)\n>>> w.field(\"field1\", \"C\")\n>>> w.field(\"field2\", \"C\")\n\n>>> w.record(\"row\", \"one\")\n>>> w.point(1, 1)\n\n>>> w.record(\"row\", \"two\")\n>>> w.point(2, 2)\n\nTo help prevent accidental misalignment PyShp has an \"auto balance\" feature to\nmake sure when you add either a shape or a record the two sides of the\nequation line up. This way if you forget to update an entry the\nshapefile will still be valid and handled correctly by most shapefile\nsoftware. Autobalancing is NOT turned on by default. To activate it set\nthe attribute autoBalance to 1 or True:\n>>> w.autoBalance = 1\n>>> w.record(\"row\", \"three\")\n>>> w.record(\"row\", \"four\")\n>>> w.point(4, 4)\n\n>>> w.recNum == w.shpNum\nTrue\n\nYou also have the option of manually calling the balance() method at any time\nto ensure the other side is up to date. When balancing is used\nnull shapes are created on the geometry side or records\nwith a value of \"NULL\" for each field is created on the attribute side.\nThis gives you flexibility in how you build the shapefile.\nYou can create all of the shapes and then create all of the records or vice versa.\n>>> w.autoBalance = 0\n>>> w.record(\"row\", \"five\")\n>>> w.record(\"row\", \"six\")\n>>> w.record(\"row\", \"seven\")\n>>> w.point(5, 5)\n>>> w.point(6, 6)\n>>> w.balance()\n\n>>> w.recNum == w.shpNum\nTrue\n\nIf you do not use the autoBalance() or balance() method and forget to manually\nbalance the geometry and attributes the shapefile will be viewed as corrupt by\nmost shapefile software.\nWriting .prj files\nA .prj file, or projection file, is a simple text file that stores a shapefile's map projection and coordinate reference system to help mapping software properly locate the geometry on a map. If you don't have one, you may get confusing errors when you try and use the shapefile you created. The GIS software may complain that it doesn't know the shapefile's projection and refuse to accept it, it may assume the shapefile is the same projection as the rest of your GIS project and put it in the wrong place, or it might assume the coordinates are an offset in meters from latitude and longitude 0,0 which will put your data in the middle of the ocean near Africa. The text in the .prj file is a Well-Known-Text (WKT) projection string. 
Projection strings can be quite long so they are often referenced using numeric codes called EPSG codes. The .prj file must have the same base name as your shapefile. So for example if you have a shapefile named \"myPoints.shp\", the .prj file must be named \"myPoints.prj\".\nIf you're using the same projection over and over, the following is a simple way to create the .prj file assuming your base filename is stored in a variable called \"filename\":\n>>> with open(\"{}.prj\".format(filename), \"w\") as prj:\n...     wkt = 'GEOGCS[\"WGS 84\",'\n...     wkt += 'DATUM[\"WGS_1984\",'\n...     wkt += 'SPHEROID[\"WGS 84\",6378137,298.257223563]]'\n...     wkt += ',PRIMEM[\"Greenwich\",0],'\n...     wkt += 'UNIT[\"degree\",0.0174532925199433]]'\n...     prj.write(wkt)\n\nIf you need to dynamically fetch WKT projection strings, you can use the pure Python PyCRS module which has a number of useful features.\nAdvanced Use\nCommon Errors and Fixes\nBelow we list some commonly encountered errors and ways to fix them.\nWarnings and Logging\nBy default, PyShp chooses to be transparent and provide the user with logging information and warnings about non-critical issues when reading or writing shapefiles. This behavior is controlled by the module constant VERBOSE (which defaults to True). If you would rather suppress this information, you can simply set this to False:\n>>> shapefile.VERBOSE = False\n\nAll logging happens under the namespace shapefile. So another way to suppress all PyShp warnings would be to alter the logging behavior for that namespace:\n>>> import logging\n>>> logging.getLogger('shapefile').setLevel(logging.ERROR)\n\nShapefile Encoding Errors\nPyShp supports reading and writing shapefiles in any language or character encoding, and provides several options for decoding and encoding text.\nMost shapefiles are written in UTF-8 encoding, PyShp's default encoding, so in most cases you don't have to specify the encoding.\nIf you encounter an encoding error when reading a shapefile, this means the shapefile was likely written in a non-utf8 encoding.\nFor instance, when working with English language shapefiles, a common reason for encoding errors is that the shapefile was written in Latin-1 encoding.\nFor reading shapefiles in any non-utf8 encoding, such as Latin-1, just\nsupply the encoding option when creating the Reader class.\n>>> r = shapefile.Reader(\"shapefiles/test/latin1.shp\", encoding=\"latin1\")\n>>> r.record(0) == [2, u'\u00d1and\u00fa']\nTrue\n\nOnce you have loaded the shapefile, you may choose to save it using another encoding with broader character support, such\nas UTF-8. Assuming the new encoding supports the characters you are trying to write, reading it back in\nshould give you the same unicode string you started with.\n>>> w = shapefile.Writer(\"shapefiles/test/latin_as_utf8.shp\", encoding=\"utf8\")\n>>> w.fields = r.fields[1:]\n>>> w.record(*r.record(0))\n>>> w.null()\n>>> w.close()\n\n>>> r = shapefile.Reader(\"shapefiles/test/latin_as_utf8.shp\", encoding=\"utf8\")\n>>> r.record(0) == [2, u'\u00d1and\u00fa']\nTrue\n\nIf you supply the wrong encoding and the string is unable to be decoded, PyShp will by default raise an\nexception. If however, on rare occasion, you are unable to find the correct encoding and want to ignore\nor replace encoding errors, you can specify the \"encodingErrors\" to be used by the decode method. 
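If you would rather track down the correct encoding than replace characters, one simple approach (only a sketch, not a built-in PyShp feature, and assuming the latin1 sample files used above) is to try a short list of candidate encodings and keep the first one that decodes cleanly:\n>>> for enc in [\"utf8\", \"latin1\"]:\n...     try:\n...         rec = shapefile.Reader(\"shapefiles/test/latin1.shp\", encoding=enc).record(0)\n...         break\n...     except UnicodeDecodeError:\n...         continue\n\n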
The \"encodingErrors\" option\napplies to both reading and writing.\n>>> r = shapefile.Reader(\"shapefiles/test/latin1.shp\", encoding=\"ascii\", encodingErrors=\"replace\")\n>>> r.record(0) == [2, u'\ufffdand\ufffd']\nTrue\n\nReading Large Shapefiles\nDespite being a lightweight library, PyShp is designed to be able to read shapefiles of any size, allowing you to work with hundreds of thousands or even millions\nof records and complex geometries.\nIterating through a shapefile\nAs an example, let's load this Natural Earth shapefile of more than 4000 global administrative boundary polygons:\n>>> sf = shapefile.Reader(\"https://github.com/nvkelso/natural-earth-vector/blob/master/10m_cultural/ne_10m_admin_1_states_provinces?raw=true\")\n\nWhen first creating the Reader class, the library only reads the header information\nand leaves the rest of the file contents alone. Once you call the records() and shapes()\nmethods however, it will attempt to read the entire file into memory at once.\nFor very large files this can result in MemoryError. So when working with large files\nit is recommended to use the iterShapes(), iterRecords(), or iterShapeRecords()\nmethods instead. These iterate through the file contents one at a time, enabling you to loop\nthrough them while keeping memory usage at a minimum.\n>>> for shape in sf.iterShapes():\n...     # do something here\n...     pass\n\n>>> for rec in sf.iterRecords():\n...     # do something here\n...     pass\n\n>>> for shapeRec in sf.iterShapeRecords():\n...     # do something here\n...     pass\n\n>>> for shapeRec in sf: # same as iterShapeRecords()\n...     # do something here\n...     pass\n\nLimiting which fields to read\nBy default when reading the attribute records of a shapefile, pyshp unpacks and returns the data for all of the dbf fields, regardless of whether you actually need that data or not. To limit which field data is unpacked when reading each record and speed up processing time, you can specify the fields argument to any of the methods involving record data. Note that the order of the specified fields does not matter; the resulting records will list the specified field values in the order that they appear in the original dbf file. For instance, if we are only interested in the country and name of each admin unit, the following is a more efficient way of iterating through the file:\n>>> fields = [\"geonunit\", \"name\"]\n>>> for rec in sf.iterRecords(fields=fields):\n... \t# do something\n... \tpass\n>>> rec\nRecord #4595: ['Birgu', 'Malta']\n\nAttribute filtering\nIn many cases, we aren't interested in all entries of a shapefile, but rather only want to retrieve a small subset of records by filtering on some attribute. To avoid wasting time reading records and shapes that we don't need, we can start by iterating only the records and fields of interest, check if the record matches some condition as a way to filter the data, and finally load the full record and shape geometry for those that meet the condition:\n>>> filter_field = \"geonunit\"\n>>> filter_value = \"Eritrea\"\n>>> for rec in sf.iterRecords(fields=[filter_field]):\n...     if rec[filter_field] == filter_value:\n... \t\t# load full record and shape\n... \t\tshapeRec = sf.shapeRecord(rec.oid)\n... 
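\t\t# then read just the name of each matching admin unit\n... 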
\t\tshapeRec.record[\"name\"]\n'Debubawi Keyih Bahri'\n'Debub'\n'Semenawi Keyih Bahri'\n'Gash Barka'\n'Maekel'\n'Anseba'\n\nSelectively reading only the necessary data in this way is particularly useful for efficiently processing a limited subset of data from very large files or when looping through a large number of files, especially if they contain large attribute tables or complex shape geometries.\nSpatial filtering\nAnother common use-case is that we only want to read those records that are located in some region of interest. Because the shapefile stores the bounding box of each shape separately from the geometry data, it's possible to quickly retrieve all shapes that might overlap a given bounding box region without having to load the full shape geometry data for every shape. This can be done by specifying the bbox argument to any of the record or shape methods:\n>>> bbox = [36.423, 12.360, 43.123, 18.004] # ca bbox of Eritrea\n>>> fields = [\"geonunit\",\"name\"]\n>>> for shapeRec in sf.iterShapeRecords(bbox=bbox, fields=fields):\n... \tshapeRec.record\nRecord #368: ['Afar', 'Ethiopia']\nRecord #369: ['Tadjourah', 'Djibouti']\nRecord #375: ['Obock', 'Djibouti']\nRecord #376: ['Debubawi Keyih Bahri', 'Eritrea']\nRecord #1106: ['Amhara', 'Ethiopia']\nRecord #1107: ['Gedarif', 'Sudan']\nRecord #1108: ['Tigray', 'Ethiopia']\nRecord #1414: ['Sa`dah', 'Yemen']\nRecord #1415: ['`Asir', 'Saudi Arabia']\nRecord #1416: ['Hajjah', 'Yemen']\nRecord #1417: ['Jizan', 'Saudi Arabia']\nRecord #1598: ['Debub', 'Eritrea']\nRecord #1599: ['Red Sea', 'Sudan']\nRecord #1600: ['Semenawi Keyih Bahri', 'Eritrea']\nRecord #1601: ['Gash Barka', 'Eritrea']\nRecord #1602: ['Kassala', 'Sudan']\nRecord #1603: ['Maekel', 'Eritrea']\nRecord #2037: ['Al Hudaydah', 'Yemen']\nRecord #3741: ['Anseba', 'Eritrea']\n\nThis functionality means that shapefiles can be used as a bare-bones spatially indexed database, with very fast bounding box queries for even the largest of shapefiles. Note that, as with all spatial indexing, this method does not guarantee that the geometries of the resulting matches overlap the queried region, only that their bounding boxes overlap.\nWriting large shapefiles\nSimilar to the Reader class, the shapefile Writer class uses a streaming approach to keep memory\nusage at a minimum and allow writing shapefiles of arbitrarily large sizes. The library takes care of this under-the-hood by immediately\nwriting each geometry and record to disk the moment they\nare added using shape() or record(). Once the writer is closed, exited, or garbage\ncollected, the final header information is calculated and written to the beginning of\nthe file.\nMerging multiple shapefiles\nThis means that it's possible to merge hundreds or thousands of shapefiles, as\nlong as you iterate through the source files to avoid loading everything into\nmemory. The following example copies the contents of a shapefile to a new file 10 times:\n>>> # create writer\n>>> w = shapefile.Writer('shapefiles/test/merge')\n\n>>> # copy over fields from the reader\n>>> r = shapefile.Reader(\"shapefiles/blockgroups\")\n>>> for field in r.fields[1:]:\n...     w.field(*field)\n\n>>> # copy the shapefile to writer 10 times\n>>> repeat = 10\n>>> for i in range(repeat):\n...     r = shapefile.Reader(\"shapefiles/blockgroups\")\n...     for shapeRec in r.iterShapeRecords():\n...         w.record(*shapeRec.record)\n...         
w.shape(shapeRec.shape)\n\n>>> # check that the written file is 10 times longer\n>>> len(w) == len(r) * 10\nTrue\n\n>>> # close the writer\n>>> w.close()\n\nIn this trivial example, we knew that all files had the exact same field names, ordering, and types. In other scenarios, you will have to additionally make sure that all shapefiles have the exact same fields in the same order, and that they all contain the same geometry type.\nEditing shapefiles\nIf you need to edit a shapefile you would have to read the\nfile one record at a time, modify or filter the contents, and write it back out. For instance, to create a copy of a shapefile that only keeps a subset of relevant fields:\n>>> # create writer\n>>> w = shapefile.Writer('shapefiles/test/edit')\n\n>>> # define which fields to keep\n>>> keep_fields = ['BKG_KEY', 'MEDIANRENT']\n\n>>> # copy over the relevant fields from the reader\n>>> r = shapefile.Reader(\"shapefiles/blockgroups\")\n>>> for field in r.fields[1:]:\n...     if field[0] in keep_fields:\n...         w.field(*field)\n\n>>> # write only the relevant attribute values\n>>> for shapeRec in r.iterShapeRecords(fields=keep_fields):\n...     w.record(*shapeRec.record)\n...     w.shape(shapeRec.shape)\n\n>>> # close writer\n>>> w.close()\n\n3D and Other Geometry Types\nMost shapefiles store conventional 2D points, lines, or polygons. But the shapefile format is also capable\nof storing various other types of geometries as well, including complex 3D surfaces and objects.\nShapefiles with measurement (M) values\nMeasured shape types are shapes that include a measurement value at each vertex, for instance\nspeed measurements from a GPS device. Shapes with measurement (M) values are added with the following\nmethods: \"pointm\", \"multipointm\", \"linem\", and \"polygonm\". The M-values are specified by adding a\nthird M value to each XY coordinate. Missing or unobserved M-values are specified with a None value,\nor by simply omitting the third M-coordinate.\n>>> w = shapefile.Writer('shapefiles/test/linem')\n>>> w.field('name', 'C')\n\n>>> w.linem([\n...\t\t\t[[1,5,0],[5,5],[5,1,3],[3,3,None],[1,1,0]], # line with one omitted and one missing M-value\n...\t\t\t[[3,2],[2,6]] # line without any M-values\n...\t\t\t])\n\n>>> w.record('linem1')\n\n>>> w.close()\n\nShapefiles containing M-values can be examined in several ways:\n>>> r = shapefile.Reader('shapefiles/test/linem')\n\n>>> r.mbox # the lower and upper bound of M-values in the shapefile\n[0.0, 3.0]\n\n>>> r.shape(0).m # flat list of M-values\n[0.0, None, 3.0, None, 0.0, None, None]\n\nShapefiles with elevation (Z) values\nElevation shape types are shapes that include an elevation value at each vertex, for instance elevation from a GPS device.\nShapes with elevation (Z) values are added with the following methods: \"pointz\", \"multipointz\", \"linez\", and \"polyz\".\nThe Z-values are specified by adding a third Z value to each XY coordinate. Z-values do not support the concept of missing data,\nbut if you omit the third Z-coordinate it will default to 0. Note that Z-type shapes also support measurement (M) values added\nas a fourth M-coordinate. 
This too is optional.\n>>> w = shapefile.Writer('shapefiles/test/linez')\n>>> w.field('name', 'C')\n\n>>> w.linez([\n...\t\t\t[[1,5,18],[5,5,20],[5,1,22],[3,3],[1,1]], # line with some omitted Z-values\n...\t\t\t[[3,2],[2,6]], # line without any Z-values\n...\t\t\t[[3,2,15,0],[2,6,13,3],[1,9,14,2]] # line with both Z- and M-values\n...\t\t\t])\n\n>>> w.record('linez1')\n\n>>> w.close()\n\nTo examine a Z-type shapefile you can do:\n>>> r = shapefile.Reader('shapefiles/test/linez')\n\n>>> r.zbox # the lower and upper bound of Z-values in the shapefile\n[0.0, 22.0]\n\n>>> r.shape(0).z # flat list of Z-values\n[18.0, 20.0, 22.0, 0.0, 0.0, 0.0, 0.0, 15.0, 13.0, 14.0]\n\n3D MultiPatch Shapefiles\nMultipatch shapes are useful for storing composite 3-Dimensional objects.\nA MultiPatch shape represents a 3D object made up of one or more surface parts.\nEach surface in \"parts\" is defined by a list of XYZM values (Z and M values optional), and its corresponding type is\ngiven in the \"partTypes\" argument. The part type decides how the coordinate sequence is to be interpreted, and can be one\nof the following module constants: TRIANGLE_STRIP, TRIANGLE_FAN, OUTER_RING, INNER_RING, FIRST_RING, or RING.\nFor instance, a TRIANGLE_STRIP may be used to represent the walls of a building, combined with a TRIANGLE_FAN to represent\nits roof:\n>>> from shapefile import TRIANGLE_STRIP, TRIANGLE_FAN\n\n>>> w = shapefile.Writer('shapefiles/test/multipatch')\n>>> w.field('name', 'C')\n\n>>> w.multipatch([\n...\t\t\t\t [[0,0,0],[0,0,3],[5,0,0],[5,0,3],[5,5,0],[5,5,3],[0,5,0],[0,5,3],[0,0,0],[0,0,3]], # TRIANGLE_STRIP for house walls\n...\t\t\t\t [[2.5,2.5,5],[0,0,3],[5,0,3],[5,5,3],[0,5,3],[0,0,3]], # TRIANGLE_FAN for pointed house roof\n...\t\t\t\t ],\n...\t\t\t\t partTypes=[TRIANGLE_STRIP, TRIANGLE_FAN]) # one type for each part\n\n>>> w.record('house1')\n\n>>> w.close()\n\nFor an introduction to the various multipatch part types and examples of how to create 3D MultiPatch objects see this\nESRI White Paper.\nTesting\nThe testing framework is pytest, and the tests are located in test_shapefile.py.\nThis includes an extensive set of unit tests of the various pyshp features,\nand tests against various input data. Some of the tests that require\ninternet connectivity will be skipped in offline testing environments.\nIn the same folder as README.md and shapefile.py, from the command line run\n$ python -m pytest\n\nAdditionally, all the code and examples located in this file, README.md,\nis tested and verified with the builtin doctest framework.\nA special routine for invoking the doctest is run when calling directly on shapefile.py.\nIn the same folder as README.md and shapefile.py, from the command line run\n$ python shapefile.py\n\nLinux/Mac and similar platforms will need to run $ dos2unix README.md in order\nto correct line endings in README.md.\nContributors\nAtle Frenvik Sveen\nBas Couwenberg\nBen Beasley\nCasey Meisenzahl\nCharles Arnold\nDavid A. 
Riggs\ndavidh-ssec\nEvan Heidtmann\nezcitron\nfiveham\ngeospatialpython\nHannes\nIgnacio Martinez Vazquez\nJason Moujaes\nJonty Wareing\nKarim Bahgat\nkaranrn\nKyle Kelley\nLouis Tiao\nMarcin Cuprjak\nmcuprjak\nMicah Cochran\nMichael Davis\nMichal \u010ciha\u0159\nMike Toews\nMiroslav \u0160ediv\u00fd\nNilo\npakoun\nPaulo Ernesto\nRaynor Vliegendhart\nRazzi Abuissa\nRosBer97\nRoss Rogers\nRyan Brideau\nTim Gates\nTobias Megies\nTommi Penttinen\nUli K\u00f6hler\nVsevolod Novikov\nZac Miller\n\n\n\n", "description": "Reads and writes Esri Shapefiles in pure Python."}, {"name": "pyprover", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyProver\nUsage\nExamples\nCompiling PyProver\n\n\n\n\n\nREADME.md\n\n\n\n\nPyProver\nPyProver is a resolution theorem prover for first-order predicate logic. PyProver is written in Coconut which compiles to pure, universal Python, allowing PyProver to work on any Python version.\nInstalling PyProver is as simple as\npip install pyprover\n\nUsage\nTo use PyProver from a Python interpreter, it is recommended to\nfrom pyprover import *\nwhich will populate the global namespace with capital letters as propositions/predicates, and lowercase letters as constants/variables/functions. When using PyProver from a Python file, however, it is recommended to only import what you need.\nFormulas can be constructed using the built-in Python operators on propositions and terms combined with Exists (or EX), ForAll (or FA), Eq, top, and bot. For example:\nA & B\nA | ~B\n~(A | B)\nP >> Q\nP >> (Q >> P)\n(F & G) >> H\nE >> top\nbot >> E\nFA(x, F(x))\nTE(x, F(x) | G(x))\nFA(x, F(f(x)) >> F(x))\nEq(a, b)\nAlternatively, the expr(formula) function can be used, which parses a formula in standard mathematical notation. For example:\nF \u2227 G \u2228 (C \u2192 \u00acD)\nF /\\ G \\/ (C -> ~D)\nF & G | (C -> -D)\n\u22a4 \u2227 \u22a5\ntop /\\ bot\nF -> G -> H\nA x. F(x) /\\ G(x)\n\u2200x. F(x) /\\ G(x)\nE x. C(x) \\/ D(x)\n\u2203x. C(x) \\/ D(x)\n\u2200x. \u2203y. G(f(x, y))\na = b\nforall x: A, B(x)\nexists x: A, B(x)\n\nNote that expr requires propositions/predicates to start with a capital letter and constants/variables/functions to start with a lowercase letter.\nOnce a formula has been constructed, various functions are provided to work with them. Some of the most important of these are:\n\nstrict_simplify(expr) finds an equivalent, standardized version of the given expr,\nsimplify(expr) is the same as strict_simplify, but it implicitly assumes TE(x, top) (something exists),\nstrict_proves(givens, concl) determines if concl can be derived from givens, and\nproves(givens, concl) is the same as strict_proves, but it implicitly assumes TE(x, top) (something exists).\n\nTo construct additional propositions/predicates, the function props(\"name1 name2 name3 ...\") will return propositions/predicates for the given names, and to construct additional constants/variables/functions, the function terms(\"name1 name2 name3 ...\") can be used similarly.\nExamples\nThe backtick infix syntax here is from Coconut. 
If using Python instead simply adjust to standard function call syntax.\nfrom pyprover import *\n\n# constructive propositional logic\nassert (E, E>>F, F>>G) `proves` G\nassert (E>>F, F>>G) `proves` E>>G\n\n# classical propositional logic\nassert ~~E `proves` E\nassert top `proves` (E>>F)|(F>>E)\n\n# constructive predicate logic\nassert R(j) `proves` TE(x, R(x))\nassert (FA(x, R(x) >> S(x)), TE(y, R(y))) `proves` TE(z, S(z))\n\n# classical predicate logic\nassert ~FA(x, R(x)) `proves` TE(y, ~R(y))\nassert top `proves` TE(x, D(x)) | FA(x, ~D(x))\n\n# use of expr parser\nassert expr(r\"A x. E y. F(x) \\/ G(y)\") == FA(x, TE(y, F(x) | G(y)))\nassert expr(r\"a = b /\\ b = c\") == Eq(a, b) & Eq(b, c)\nCompiling PyProver\nIf you want to compile PyProver yourself instead of installing it from PyPI with pip, you can\n\nclone the git repository,\nrun make setup, and\nrun make install.\n\n\n\n", "description": "Resolution theorem prover for first-order predicate logic."}, {"name": "pyproj", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyproj\nDocumentation\nBugs/Questions\nContributors \u2728\n\n\n\n\n\nREADME.md\n\n\n\n\n\npyproj\nPython interface to PROJ (cartographic projections and coordinate transformations library).\n\n\n\n\n\n\n\n\n\n\n\n\n\nDocumentation\n\nStable: http://pyproj4.github.io/pyproj/stable/\nLatest: https://pyproj4.github.io/pyproj/latest/\n\nBugs/Questions\n\nReport bugs/feature requests: https://github.com/pyproj4/pyproj/issues\nAsk questions: https://github.com/pyproj4/pyproj/discussions\nAsk developer questions: https://gitter.im/pyproj4-pyproj/community\nAsk the GIS community: https://gis.stackexchange.com/questions/tagged/pyproj\n\nContributors \u2728\nThanks goes to these wonderful people (emoji key):\n\n\nJeff Whitaker\ud83d\udcd6 \u26a0\ufe0f \ud83d\udcbb \ud83d\udca1 \ud83e\udd14 \ud83d\udc40 \ud83d\udcac \ud83d\udea7 \ud83d\ude87 \ud83d\udc1b\nAlan D. Snow\ud83d\udcd6 \u26a0\ufe0f \ud83d\udcbb \ud83d\udca1 \ud83d\udea7 \ud83d\ude87 \ud83e\udd14 \ud83d\udc40 \ud83d\udcac \ud83d\udc1b\nMicah Cochran\ud83d\udcd6 \u26a0\ufe0f \ud83d\udcbb \ud83d\udea7 \ud83d\ude87 \ud83d\udc40 \ud83d\udcac \ud83d\udc1b\nJoris Van den Bossche\ud83d\udcd6 \ud83d\udcbb \ud83e\udd14 \ud83d\udc40 \ud83d\udcac \ud83d\udc1b \u26a0\ufe0f\nChris Mayo\u26a0\ufe0f\nCharles Karney\ud83d\udcbb \u26a0\ufe0f\nJustin Dearing\ud83d\ude87\n\n\nJos de Kloe\ud83d\udcbb \u26a0\ufe0f \ud83d\udc1b\nGeorge Ouzounoudis\ud83d\udcbb \ud83e\udd14\nDavid Hoese\ud83d\udc40 \ud83e\udd14 \ud83d\udce6 \ud83d\udcd6 \u26a0\ufe0f \ud83d\udcbb\nMikhail Itkin\ud83d\udcbb\nRyan May\ud83d\udcbb\nartttt\ud83e\udd14\nFilipe\ud83d\ude87 \ud83d\udcbb \ud83d\udce6 \ud83d\udcd6\n\n\nHeitor\ud83d\udcd6\nBas Couwenberg\ud83d\udcbb \ud83d\udce6 \u26a0\ufe0f\nNick Eubank\ud83d\udcbb\nMichael Dunphy\ud83d\udcd6\nMatthew Brett\ud83d\ude87 \ud83d\udce6\nJakob de Maeyer \ud83d\udcbb\nThe Gitter Badger\ud83d\udcd6\n\n\nBernhard M. Wiedemann\ud83d\udcbb\nMarco Aur\u00e9lio da Costa\ud83d\udcbb\nChristopher H. 
Barker\ud83d\udcbb\nKristian Evers\ud83d\udcac \ud83e\udd14 \ud83d\udcd6\nEven Rouault\ud83d\udcac\nChristoph Gohlke\ud83d\udce6 \ud83d\udcac \ud83d\udc1b \u26a0\ufe0f\nChris Willoughby\ud83d\udcbb\n\n\nGuillaume Lostis\ud83d\udcd6\nEduard Popov\ud83d\udcd6\nJoe Ranalli\ud83d\udc1b \ud83d\udcbb \u26a0\ufe0f\nGreg Berardinelli\ud83d\udc1b \ud83d\udcbb \ud83e\udd14 \u26a0\ufe0f\nMartin Raspaud\ud83d\udc1b \ud83d\udcbb \u26a0\ufe0f \ud83e\udd14\nMike Taves\u26a0\ufe0f\nDavid Haberth\u00fcr\ud83d\udcd6\n\n\nmmodenesi\ud83d\udc1b \ud83d\udcbb \u26a0\ufe0f\njacob-indigo\ud83d\udc1b \ud83d\udcbb\nPoruri Sai Rahul\u26a0\ufe0f\nYann-Sebastien Tremblay-Johnston\ud83d\udcd6\nodidev\ud83d\udce6\nIdan Miara\ud83d\udcbb \ud83d\udcd6 \ud83d\udca1 \u26a0\ufe0f\nBrendan Jurd\ud83d\udcd6 \ud83c\udfa8\n\n\nBill Little\ud83d\udcd6\nGerrit Holl\ud83d\udcd6\nKirill Kouzoubov\ud83d\udcbb\nDan Hemberger\ud83d\udc1b \ud83d\udcbb\nMartin Fleischmann\ud83d\udc1b \ud83d\udcbb \u26a0\ufe0f\nMatthias Meulien\ud83d\udcbb \ud83d\udc1b\nIsaac Boates\ud83d\udcbb \ud83d\udc1b \u26a0\ufe0f\n\n\nKyle Penner\ud83d\udcbb \ud83d\udc1b \ud83d\udcd6\npaulcochrane\ud83d\udcbb \ud83d\udcd6 \u26a0\ufe0f \ud83d\udc1b\nAntonio Ettorre\ud83d\udce6\nDWesl\ud83d\udcbb\nV\u00edctor Molina Garc\u00eda\ud83d\udce6\nSamuel Kogler\ud83d\udc1b \ud83d\udcbb\nAlexander Shadchin\ud83d\udc1b \ud83d\udcbb\n\n\nGreg Lucas\ud83d\udcbb \ud83e\udd14\nDan Mahr\ud83d\udcbb \ud83d\udcd6 \u26a0\ufe0f\nRomain Hugonnet\ud83d\udcbb \ud83d\udcd6 \u26a0\ufe0f\nJavier Jimenez Shaw\ud83d\udcbb \ud83d\udcd6 \u26a0\ufe0f\n\n\nThis project follows the all-contributors specification. Contributions of any kind welcome!\n\n\n", "description": "Python interface to PROJ cartographic projections library."}, {"name": "pyphen", "readme": "\nPyphen is a pure Python module to hyphenate text using existing Hunspell\nhyphenation dictionaries.\nThis module is a fork of python-hyphenator, written by Wilbert Berendsen.\nMany dictionaries are included in pyphen, they come from the LibreOffice git\nrepository and are distributed under GPL, LGPL and/or MPL. Dictionaries are not\nmodified in this repository. See the dictionaries and LibreOffice\u2019s repository\nfor more details.\nhttps://cgit.freedesktop.org/libreoffice/dictionaries/tree/\n\nFree software: GPL 2.0+/LGPL 2.1+/MPL 1.1 tri-license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/pyphen\nChangelog: https://github.com/Kozea/pyphen/releases\nCode, issues, tests: https://github.com/Kozea/pyphen\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\nPyphen has been created and developed by Kozea (https://kozea.fr).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to Pyphen. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under\nGPL\u202f2.0+/LGPL\u202f2.1+/MPL\u202f1.1, without any additional terms or conditions. For\nfull authorship information, see the version control history.\n", "description": "Hyphenate text using Hunspell dictionaries."}, {"name": "PyPDF2", "readme": "\n\n\n\n\n\nNOTE: The PyPDF2 project is going back to its roots. PyPDF2==3.0.X will be  the last version of PyPDF2. 
Development will continue with pypdf==3.1.0.\nPyPDF2\nPyPDF2 is a free and open-source pure-python PDF library capable of splitting,\nmerging,\ncropping, and transforming\nthe pages of PDF files. It can also add\ncustom data, viewing options, and\npasswords\nto PDF files. PyPDF2 can\nretrieve text\nand\nmetadata\nfrom PDFs as well.\nInstallation\nYou can install PyPDF2 via pip:\npip install PyPDF2\n\nIf you plan to use PyPDF2 for encrypting or decrypting PDFs that use AES, you\nwill need to install some extra dependencies. Encryption using RC4 is supported\nusing the regular installation.\npip install PyPDF2[crypto]\n\nUsage\nfrom PyPDF2 import PdfReader\n\nreader = PdfReader(\"example.pdf\")\nnumber_of_pages = len(reader.pages)\npage = reader.pages[0]\ntext = page.extract_text()\n\nPyPDF2 can do a lot more, e.g. splitting, merging, reading and creating\nannotations, decrypting and encrypting, and more.\nPlease see the documentation\nfor more usage examples!\nA lot of questions are asked and answered\non StackOverflow.\nContributions\nMaintaining PyPDF2 is a collaborative effort. You can support PyPDF2 by writing\ndocumentation, helping to narrow down issues, and adding code.\nQ&A\nThe experience PyPDF2 users have covers the whole range from beginners who\nwant to make their live easier to experts who developed software before PDF\nexisted. You can contribute to the PyPDF2 community by answering questions\non StackOverflow,\nhelping in discussions,\nand asking users who report issues for MCVE's (Code + example PDF!).\nIssues\nA good bug ticket includes a MCVE - a minimal complete verifiable example.\nFor PyPDF2, this means that you must upload a PDF that causes the bug to occur\nas well as the code you're executing with all of the output. Use\nprint(PyPDF2.__version__) to tell us which version you're using.\nCode\nAll code contributions are welcome, but smaller ones have a better chance to\nget included in a timely manner. Adding unit tests for new features or test\ncases for bugs you've fixed help us to ensure that the Pull Request (PR) is fine.\nPyPDF2 includes a test suite which can be executed with pytest:\n$ pytest\n===================== test session starts =====================\nplatform linux -- Python 3.6.15, pytest-7.0.1, pluggy-1.0.0\nrootdir: /home/moose/GitHub/Martin/PyPDF2\nplugins: cov-3.0.0\ncollected 233 items\n\ntests/test_basic_features.py ..                         [  0%]\ntests/test_constants.py .                               [  1%]\ntests/test_filters.py .................x.....           [ 11%]\ntests/test_generic.py ................................. [ 25%]\n.............                                           [ 30%]\ntests/test_javascript.py ..                             [ 31%]\ntests/test_merger.py .                                  [ 32%]\ntests/test_page.py .........................            [ 42%]\ntests/test_pagerange.py ................                [ 49%]\ntests/test_papersizes.py ..................             [ 57%]\ntests/test_reader.py .................................. [ 72%]\n...............                                         [ 78%]\ntests/test_utils.py ....................                [ 87%]\ntests/test_workflows.py ..........                      [ 91%]\ntests/test_writer.py .................                  [ 98%]\ntests/test_xmp.py ...                                   
[100%]\n\n========== 232 passed, 1 xfailed, 1 warning in 4.52s ==========\n\n", "description": "Pure Python PDF library capable of splitting, merging and transforming PDFs."}, {"name": "pyparsing", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyParsing -- A Python Parsing Module\nIntroduction\nDocumentation\nLicense\nHistory\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nPyParsing -- A Python Parsing Module\n \n \n   \n\n\nIntroduction\nThe pyparsing module is an alternative approach to creating and\nexecuting simple grammars, vs. the traditional lex/yacc approach, or the\nuse of regular expressions. The pyparsing module provides a library of\nclasses that client code uses to construct the grammar directly in\nPython code.\n[Since first writing this description of pyparsing in late 2003, this\ntechnique for developing parsers has become more widespread, under the\nname Parsing Expression Grammars - PEGs. See more information on PEGs\nhere\n.]\nHere is a program to parse \"Hello, World!\" (or any greeting of the form\n\"salutation, addressee!\"):\nfrom pyparsing import Word, alphas\ngreet = Word(alphas) + \",\" + Word(alphas) + \"!\"\nhello = \"Hello, World!\"\nprint(hello, \"->\", greet.parseString(hello))\nThe program outputs the following:\nHello, World! -> ['Hello', ',', 'World', '!']\n\nThe Python representation of the grammar is quite readable, owing to the\nself-explanatory class names, and the use of '+', '|' and '^' operator\ndefinitions.\nThe parsed results returned from parseString() is a collection of type\nParseResults, which can be accessed as a\nnested list, a dictionary, or an object with named attributes.\nThe pyparsing module handles some of the problems that are typically\nvexing when writing text parsers:\n\nextra or missing whitespace (the above program will also handle \"Hello,World!\", \"Hello , World !\", etc.)\nquoted strings\nembedded comments\n\nThe examples directory includes a simple SQL parser, simple CORBA IDL\nparser, a config file parser, a chemical formula parser, and a four-\nfunction algebraic notation parser, among many others.\n\nDocumentation\nThere are many examples in the online docstrings of the classes\nand methods in pyparsing. You can find them compiled into online docs. Additional\ndocumentation resources and project info are listed in the online\nGitHub wiki. An\nentire directory of examples can be found here.\n\nLicense\nMIT License. See header of the pyparsing __init__.py file.\n\nHistory\nSee CHANGES file.\n\n\n", "description": "Library for creating and executing simple grammars as an alternative to regex and lex/yacc."}, {"name": "pypandoc", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npypandoc\nInstallation\nInstalling via pip\nInstalling via conda\nInstalling pandoc\nInstalling pandoc via pypandoc\nInstalling pandoc manually\nSpecifying the location of pandoc binaries\nUsage\nDealing with Formatting Arguments\nLogging Messages\nGetting Pandoc Version\nRelated\nContributing\nContributors\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\npypandoc\n\n\n\n\n\n\n\n\n\n\n\nPypandoc provides a thin wrapper for pandoc, a universal\ndocument converter.\nInstallation\nPypandoc uses pandoc, so it needs an available installation of pandoc. Pypandoc provides 2 packages, \"pypandoc\" and \"pypandoc_binary\", with the second one including pandoc out of the box.\nThe 2 packages are identical, with the only difference being that one includes pandoc, while the other don't.\nIf pandoc is already installed (i.e. 
pandoc is in the PATH), pypandoc uses the version with the\nhigher version number, and if both are the same, the already installed version. See Specifying the location of pandoc binaries for more.\nTo use pandoc filters, you must have the relevant filters installed on your machine.\nInstalling via pip\nIf you want to install pandoc yourself or are on an unsupported platform, you'll need to install \"pypandoc\" and install pandoc manually\npip install pypandoc\n\nIf you want pandoc included out of the box, you can utilize our pypandoc_binary package, which is identical to the \"pypandoc\" package, but with pandoc included.\npip install pypandoc_binary\n\nPrebuilt wheels for Windows and Mac OS X\nIf you use Linux and have your own wheelhouse,\nyou can build a wheel which includes pandoc with\npython setup_binary.py download_pandoc; python setup.py bdist_wheel. Be aware that this works only\non 64-bit Intel systems, as we only download it from the\nofficial releases.\nInstalling via conda\nPypandoc is included in conda-forge. The conda packages will\nalso install the pandoc package, so pandoc is available in the installation.\nInstall via conda install -c conda-forge pypandoc.\nYou can also add the channel to your conda config via\nconda config --add channels conda-forge. This makes it possible to\nuse conda install pypandoc directly and also lets you update via conda update pypandoc.\nInstalling pandoc\nIf you don't already have pandoc on your system and have not installed the pypandoc_binary package (which includes pandoc), you need to install pandoc yourself.\nInstalling pandoc via pypandoc\nInstalling via pypandoc is possible on Windows, Mac OS X or Linux (Intel-based, 64-bit):\npip install pypandoc\nfrom pypandoc.pandoc_download import download_pandoc\n# see the documentation for how to customize the installation path\n# but be aware that you then need to include it in the `PATH`\ndownload_pandoc()\nThe default install location is included in the search path for pandoc, so you\ndon't need to add it to the PATH.\nBy default, the latest pandoc version is installed. If you want to specify your own version, say 1.19.1, use download_pandoc(version='1.19.1') instead.\nInstalling pandoc manually\nInstalling manually via the system mechanism is also possible. Such installation mechanisms\nmake pandoc available on many more platforms:\n\nUbuntu/Debian: sudo apt-get install pandoc\nFedora/Red Hat: sudo yum install pandoc\nArch: sudo pacman -S pandoc\nMac OS X with Homebrew: brew install pandoc pandoc-citeproc Caskroom/cask/mactex\nMachine with Haskell: cabal-install pandoc\nWindows: There is an installer available\nhere\nFreeBSD with pkg: pkg install hs-pandoc\nOr see Pandoc - Installing pandoc\n\nBe aware that not all install mechanisms put pandoc in the PATH, so you either\nhave to change the PATH yourself or set the full PATH to pandoc in\nPYPANDOC_PANDOC. See the next section for more information.\nSpecifying the location of pandoc binaries\nYou can point to a specific pandoc version by setting the environment variable\nPYPANDOC_PANDOC to the full PATH to the pandoc binary\n(PYPANDOC_PANDOC=/home/x/whatever/pandoc or PYPANDOC_PANDOC=c:\\pandoc\\pandoc.exe).\nIf this environment variable is set, this is the only place where pandoc is searched for.\nIn certain cases, e.g. 
pandoc is installed but a web server with its own user\ncannot find the binaries, it is useful to specify the location at runtime:\nimport os\nos.environ.setdefault('PYPANDOC_PANDOC', '/home/x/whatever/pandoc')\nUsage\nThere are two basic ways to use pypandoc: with input files or with input\nstrings.\nimport pypandoc\n\n# With an input file: it will infer the input format from the filename\noutput = pypandoc.convert_file('somefile.md', 'rst')\n\n# ...but you can overwrite the format via the `format` argument:\noutput = pypandoc.convert_file('somefile.txt', 'rst', format='md')\n\n# alternatively you could just pass some string. In this case you need to\n# define the input format:\noutput = pypandoc.convert_text('# some title', 'rst', format='md')\n# output == 'some title\\r\\n==========\\r\\n\\r\\n'\nconvert_text expects this string to be unicode or utf-8 encoded bytes. convert_* will always\nreturn a unicode string.\nIt's also possible to directly let pandoc write the output to a file. This is the only way to\nconvert to some output formats (e.g. odt, docx, epub, epub3, pdf). In that case convert_*() will\nreturn an empty string.\nimport pypandoc\n\noutput = pypandoc.convert_file('somefile.md', 'docx', outputfile=\"somefile.docx\")\nassert output == \"\"\nIt's also possible to specify multiple input files to pandoc, either as absolute paths, relative paths or file patterns.\nimport pypandoc\n\n# convert all markdown files in a chapters/ subdirectory.\npypandoc.convert_file('chapters/*.md', 'docx', outputfile=\"somefile.docx\")\n\n# convert all markdown files in the book1 and book2 directories.\npypandoc.convert_file(['book1/*.md', 'book2/*.md'], 'docx', outputfile=\"somefile.docx\")\n\n# convert the front from another drive, and all markdown files in the chapter directory.\npypandoc.convert_file(['D:/book_front.md', 'book2/*.md'], 'docx', outputfile=\"somefile.docx\")\npathlib is also supported.\nimport pypandoc\nfrom pathlib import Path\n\n# single file\ninput = Path('somefile.md')\noutput = input.with_suffix('.docx')\npypandoc.convert_file(input, 'docx', outputfile=output)\n\n# convert all markdown files in a chapters/ subdirectory.\npypandoc.convert_file(Path('chapters').glob('*.md'), 'docx', outputfile=\"somefile.docx\")\n\n# convert all markdown files in the book1 and book2 directories.\npypandoc.convert_file([*Path('book1').glob('*.md'), *Path('book2').glob('*.md')], 'docx', outputfile=\"somefile.docx\")\n# pathlib globs must be unpacked if they are inside lists.\nIn addition to format, it is possible to pass extra_args.\nThat makes it possible to access various pandoc options easily.\noutput = pypandoc.convert_text(\n    '<h1>Primary Heading</h1>',\n    'md', format='html',\n    extra_args=['--atx-headers'])\n# output == '# Primary Heading\\r\\n'\noutput = pypandoc.convert_text(\n    '# Primary Heading',\n    'html', format='md',\n    extra_args=['--base-header-level=2'])\n# output == '<h2 id=\"primary-heading\">Primary Heading</h2>\\r\\n'\npypandoc now supports easy addition of\npandoc filters.\nfilters = ['pandoc-citeproc']\npdoc_args = ['--mathjax',\n             '--smart']\noutput = pypandoc.convert_file(filename,\n                               to='html5',\n                               format='md',\n                               extra_args=pdoc_args,\n                               filters=filters)\nPlease pass any filters in as a list and not as a string.\nPlease refer to pandoc -h and the\nofficial documentation for further details.\nDealing with Formatting 
Arguments\nPandoc supports custom formatting through the -V parameter. In order to use it through\npypandoc, use code such as this:\noutput = pypandoc.convert_file('demo.md', 'pdf', outputfile='demo.pdf',\n  extra_args=['-V', 'geometry:margin=1.5cm'])\n\nNote: it's important to separate -V and its argument within a list like that or else\nit won't work. This gotcha has to do with the way\nsubprocess.Popen works.\n\nLogging Messages\nPypandoc logs messages using the Python logging library.\nBy default, it will send messages to the console, including any messages\ngenerated by Pandoc. If desired, this behaviour can be changed by adding\nhandlers to\nthe pypandoc logger before calling any functions. For example, to mute all\nlogging, add a null handler:\nimport logging\nlogging.getLogger('pypandoc').addHandler(logging.NullHandler())\nGetting Pandoc Version\nIt can sometimes be useful to check which pandoc version is available on your system, or which\nparticular pandoc binary is used by pypandoc. For that, pypandoc provides the following\nutility functions. Example:\nprint(pypandoc.get_pandoc_version())\nprint(pypandoc.get_pandoc_path())\nprint(pypandoc.get_pandoc_formats())\n\nRelated\n\npydocverter is a client for a service called\nDocverter, which offers pandoc as a service (plus some extra goodies).\nSee pyandoc for an alternative implementation of a pandoc\nwrapper from Kenneth Reitz. This one hasn't been active in a while though.\nSee panflute which provides convert_text similar to pypandoc's. Its focus is on writing and running pandoc filters though.\n\nContributing\nContributions are welcome. When opening a PR, please keep the following guidelines in mind:\n\nBefore implementing, please open an issue for discussion.\nMake sure you have tests for the new logic.\nMake sure your code passes flake8 pypandoc/*.py tests.py\nAdd yourself to contributors at README.md unless you are already there. In that case tweak your contributions.\n\nNote that for citeproc tests to pass you'll need to have pandoc-citeproc installed. If you installed a prebuilt wheel or conda package, it is already included.\nContributors\n\nJessica Tegner - New maintainer as of 1 July 2021\nValentin Haenel - String conversion fix\nDaniel Sanchez - Automatic parsing of input/output formats\nThomas G. - Python 3 support\nBen Jao Ming - Fail gracefully if pandoc is missing\nRoss Crawford-d'Heureuse - Encode input in UTF-8 and add Django\nexample\nMichael Chow - Decode output in UTF-8\nJanusz Skonieczny - Support Windows newlines and allow encoding to\nbe specified.\ngabeos - Fix help parsing\nMarc Abramowitz - Make setup.py fail hard if pandoc is\nmissing, Travis, Dockerfile, PyPI badge, Tox, PEP-8, improved documentation\nDaniel L. - Add extra_args example to README\nAmy Guy - Exception handling for unicode errors\nFlorian E\u00dfer - Allow Markdown extensions in output format\nPhilipp Wendler - Allow Markdown extensions in input format\nJan Katins - Handling output to a file, Travis to work on newer version of pandoc, return code checking, get_pandoc_version. Helped to fix the Travis build, new convert_* API. Former maintainer of pypandoc\nAaron Gonzales - Added better filter handling\nDavid Lukes - Enabled input from non-plain-text files and made sure tests clean up template files correctly if they fail\nvalholl - Set up licensing information correctly and include examples to distribution version\nCyrille Rossant - Fixed bug by trimming out stars in the list of pandoc formats. 
Helped to fix the Travis build.\nPaul Osborne - Don't require pandoc to install pypandoc.\nFelix Yan - Added installation instructions for Arch Linux.\nKolen Cheung - Implement _get_pandoc_urls for installing arbitrary version as well as the latest version of pandoc. Minor: README, Travis, setup.py.\nRebecca Heineman - Added scanning code for finding pandoc in Windows\nAndrew Barraford - Download destination.\nJesse Widner & Dominic Thorn - Add support for lua filters\nAlex Kneisel - Added pathlib.Path support to convert_file.\nJuho Veps\u00e4l\u00e4inen - Creator and former maintainer of pypandoc\nConnor - Updated Dockerfile to Python 3.9 image and added docker compose file\n\nLicense\nPypandoc is available under MIT license. See LICENSE for more details. Pandoc itself is available under the GPL2 license.\n\n\n", "description": "Thin wrapper for pandoc document conversion tool with support for piping input strings."}, {"name": "pyOpenSSL", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyOpenSSL -- A Python wrapper around the OpenSSL library\nDiscussion\n\n\n\n\n\nREADME.rst\n\n\n\n\npyOpenSSL -- A Python wrapper around the OpenSSL library\n\n\n\n\nNote: The Python Cryptographic Authority strongly suggests the use of pyca/cryptography\nwhere possible. If you are using pyOpenSSL for anything other than making a TLS connection\nyou should move to cryptography and drop your pyOpenSSL dependency.\nHigh-level wrapper around a subset of the OpenSSL library. Includes\n\nSSL.Connection objects, wrapping the methods of Python's portable sockets\nCallbacks written in Python\nExtensive error-handling mechanism, mirroring OpenSSL's error codes\n\n... and much more.\nYou can find more information in the documentation.\nDevelopment takes place on GitHub.\n\nDiscussion\nIf you run into bugs, you can file them in our issue tracker.\nWe maintain a cryptography-dev mailing list for both user and development discussions.\nYou can also join #pyca on irc.libera.chat to ask questions or get involved.\n\n\n", "description": "Wrapper around a subset of the OpenSSL library including SSL connections and error handling."}, {"name": "PyNaCl", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyNaCl: Python binding to the libsodium library\nFeatures\nChangelog\n\n\n\n\n\nREADME.rst\n\n\n\n\nPyNaCl: Python binding to the libsodium library\n\n\n\n\n\nPyNaCl is a Python binding to libsodium, which is a fork of the\nNetworking and Cryptography library. These libraries have a stated goal of\nimproving usability, security and speed. It supports Python 3.6+ as well as\nPyPy 3.\n\nFeatures\n\nDigital signatures\nSecret-key encryption\nPublic-key encryption\nHashing and message authentication\nPassword based key derivation and password hashing\n\n\nChangelog\n\n\n", "description": "Python binding to libsodium cryptography library supporting public-key encryption, hashing and more.", "category": "Cryptography"}, {"name": "PyMuPDF", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPyMuPDF\nInstallation\nUsage\nDocumentation\nOptional Features\nAbout\nLicense and Copyright\nContact\n\n\n\n\n\nREADME.md\n\n\n\n\nPyMuPDF\nPyMuPDF is a high performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.\nInstallation\nPyMuPDF requires Python 3.8 or later, install using pip with:\npip install PyMuPDF\nThere are no mandatory external dependencies. 
However, some optional features become available only if additional packages are installed.\nYou can also try without installing by visiting PyMuPDF.io.\nUsage\nBasic usage is as follows:\nimport fitz # imports the pymupdf library\ndoc = fitz.open(\"example.pdf\") # open a document\nfor page in doc: # iterate the document pages\n  text = page.get_text() # get plain text encoded as UTF-8\n\n\nDocumentation\nFull documentation can be found on pymupdf.readthedocs.io.\nOptional Features\n\nfontTools for creating font subsets.\npymupdf-fonts contains some nice fonts for your text output.\nTesseract-OCR for optical character recognition in images and document pages.\n\nAbout\nPyMuPDF adds Python bindings and abstractions to MuPDF, a lightweight PDF, XPS, and eBook viewer, renderer, and toolkit. Both PyMuPDF and MuPDF are maintained and developed by Artifex Software, Inc.\nPyMuPDF was originally written by Jorj X. McKie.\nLicense and Copyright\nPyMuPDF is available under open-source AGPL and commercial license agreements. If you determine you cannot meet the requirements of the AGPL, please contact Artifex for more information regarding a commercial license.\nContact\nJoin us on Discord here: #pymupdf\n\n\n", "description": "High performance Pythonic wrapper around the MuPDF PDF toolkit for parsing, analyzing and converting PDFs."}, {"name": "pymc3", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFeatures\nGetting started\nIf you already know about Bayesian statistics:\nLearn Bayesian statistics with a book together with PyMC\nAudio & Video\nInstallation\nCiting PyMC\nContact\nLicense\nSoftware using PyMC\nGeneral purpose\nDomain specific\nPapers citing PyMC\nContributors\nSupport\nProfessional Consulting Support\nSponsors\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n \n  \n \n \n\nPyMC (formerly PyMC3) is a Python package for Bayesian statistical modeling\nfocusing on advanced Markov chain Monte Carlo (MCMC) and variational inference (VI)\nalgorithms. 
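To give a flavour of the model-specification syntax described under Features below, here is a hedged sketch (an illustrative addition, not taken from this README; it assumes the pymc import name used by recent releases):\nimport numpy as np\nimport pymc as pm\n\n# toy data: 100 draws from a normal distribution with unknown mean\ndata = np.random.normal(loc=1.0, scale=1.0, size=100)\n\nwith pm.Model():\n    mu = pm.Normal('mu', mu=0, sigma=10)                # prior on the mean\n    pm.Normal('obs', mu=mu, sigma=1.0, observed=data)   # likelihood\n    idata = pm.sample(1000)                             # NUTS sampling by default\n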
Its flexibility and extensibility make it applicable to a\nlarge suite of problems.\nCheck out the PyMC overview,  or\none of the many examples!\nFor questions on PyMC, head on over to our PyMC Discourse forum.\n\nFeatures\n\nIntuitive model specification syntax, for example, x ~ N(0,1)\ntranslates to x = Normal('x',0,1)\nPowerful sampling algorithms, such as the No U-Turn\nSampler, allow complex models\nwith thousands of parameters with little specialized knowledge of\nfitting algorithms.\nVariational inference: ADVI\nfor fast approximate posterior estimation as well as mini-batch ADVI\nfor large data sets.\n\nRelies on PyTensor which provides:\n\nComputation optimization and dynamic C or JAX compilation\nNumPy broadcasting and advanced indexing\nLinear algebra operators\nSimple extensibility\n\n\n\n\nTransparent support for missing value imputation\n\n\nGetting started\n\nIf you already know about Bayesian statistics:\n\nAPI quickstart guide\nThe PyMC tutorial\nPyMC examples and the API reference\n\n\nLearn Bayesian statistics with a book together with PyMC\n\nProbabilistic Programming and Bayesian Methods for Hackers: Fantastic book with many applied code examples.\nPyMC port of the book \"Doing Bayesian Data Analysis\" by John Kruschke as well as the second edition: Principled introduction to Bayesian data analysis.\nPyMC port of the book \"Statistical Rethinking A Bayesian Course with Examples in R and Stan\" by Richard McElreath\nPyMC port of the book \"Bayesian Cognitive Modeling\" by Michael Lee and EJ Wagenmakers: Focused on using Bayesian statistics in cognitive modeling.\nBayesian Analysis with Python (second edition) by Osvaldo Martin: Great introductory book. (code and errata).\n\n\nAudio & Video\n\nHere is a YouTube playlist gathering several talks on PyMC.\nYou can also find all the talks given at PyMCon 2020 here.\nThe \"Learning Bayesian Statistics\" podcast helps you discover and stay up-to-date with the vast Bayesian community. Bonus: it's hosted by Alex Andorra, one of the PyMC core devs!\n\n\nInstallation\nTo install PyMC on your system, follow the instructions on the installation guide.\n\nCiting PyMC\nPlease choose from the following:\n\n PyMC: A Modern and Comprehensive Probabilistic Programming Framework in Python, Abril-Pla O, Andreani V, Carroll C, Dong L, Fonnesbeck CJ, Kochurov M, Kumar R, Lao J, Luhmann CC, Martin OA, Osthege M, Vieira R, Wiecki T, Zinkov R. (2023)\n\n A DOI for all versions.\nDOIs for specific versions are shown on Zenodo and under Releases\n\n\nContact\nWe are using discourse.pymc.io as our main communication channel.\nTo ask a question regarding modeling or usage of PyMC we encourage posting to our Discourse forum under the \u201cQuestions\u201d Category. 
You can also suggest feature in the \u201cDevelopment\u201d Category.\nYou can also follow us on these social media platforms for updates and other announcements:\n\nLinkedIn @pymc\nYouTube @PyMCDevelopers\nTwitter @pymc_devs\nMastodon @pymc@bayes.club\n\nTo report an issue with PyMC please use the issue tracker.\nFinally, if you need to get in touch for non-technical information about the project, send us an e-mail.\n\nLicense\nApache License, Version\n2.0\n\nSoftware using PyMC\n\nGeneral purpose\n\nBambi: BAyesian Model-Building Interface (BAMBI) in Python.\ncalibr8: A toolbox for constructing detailed observation models to be used as likelihoods in PyMC.\ngumbi: A high-level interface for building GP models.\nSunODE: Fast ODE solver, much faster than the one that comes with PyMC.\npymc-learn: Custom PyMC models built on top of pymc3_models/scikit-learn API\n\n\nDomain specific\n\nExoplanet: a toolkit for modeling of transit and/or radial velocity observations of exoplanets and other astronomical time series.\nbeat: Bayesian Earthquake Analysis Tool.\nCausalPy: A package focussing on causal inference in quasi-experimental settings.\n\nPlease contact us if your software is not listed here.\n\nPapers citing PyMC\nSee Google Scholar for a continuously updated list.\n\nContributors\nSee the GitHub contributor\npage. Also read our Code of Conduct guidelines for a better contributing experience.\n\nSupport\nPyMC is a non-profit project under NumFOCUS umbrella. If you want to support PyMC financially, you can donate here.\n\nProfessional Consulting Support\nYou can get professional consulting support from PyMC Labs.\n\nSponsors\n\n\n\n\n\n\n", "description": "Bayesian statistical modeling focused on Markov chain Monte Carlo and variational inference algorithms."}, {"name": "pyluach", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyluach\nFeatures\nInstallation\nDocumentation\nExamples\nContact\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\npyluach\n\n\n\nPyluach is a Python package for dealing with Hebrew (Jewish) calendar dates.\n\nFeatures\n\nConversion between Hebrew and Gregorian dates\nFinding the difference between two dates\nFinding a date at a given duration from the given date\nRich comparisons between dates\nFinding the weekday of a given date\nFinding the weekly Parsha reading of a given date\nGetting the holiday occuring on a given date\nGenerating html and text Hebrew calendars\n\n\nInstallation\nUse pip install pyluach.\n\nDocumentation\nDocumentation for pyluach can be found at https://readthedocs.org/projects/pyluach/.\n\nExamples\n>>> from pyluach import dates, hebrewcal, parshios\n\n>>> today = dates.HebrewDate.today()\n>>> lastweek_gregorian = (today - 7).to_greg()\n>>> lastweek_gregorian < today\n    True\n>>> today - lastweek_gregorian\n7\n>>> greg = dates.GregorianDate(1986, 3, 21)\n>>> heb = dates.HebrewDate(5746, 13, 10)\n>>> greg == heb\nTrue\n\n>>> purim = dates.HebrewDate(5781, 12, 14)\n>>> purim.hebrew_day()\n'\u05d9\u05f4\u05d3'\n>>> purim.hebrew_date_string()\n'\u05d9\u05f4\u05d3 \u05d0\u05d3\u05e8 \u05ea\u05e9\u05e4\u05f4\u05d0'\n>>> purim.hebrew_date_string(True)\n'\u05d9\u05f4\u05d3 \u05d0\u05d3\u05e8 \u05d4\u05f3\u05ea\u05e9\u05e4\u05f4\u05d0'\n\n>>> rosh_hashana = dates.HebrewDate(5782, 7, 1)\n>>> rosh_hashana.holiday()\n'Rosh Hashana'\n>>> rosh_hashana.holiday(hebrew=True)\n'\u05e8\u05d0\u05e9 \u05d4\u05e9\u05e0\u05d4'\n>>> (rosh_hashana + 3).holiday()\nNone\n\n>>> month = hebrewcal.Month(5781, 10)\n>>> month.month_name()\n'Teves'\n>>> 
month.month_name(True)\n'\u05d8\u05d1\u05ea'\n>>> month + 3\nMonth(5781, 1)\n>>> for month in hebrewcal.Year(5774).itermonths():\n...     print(month.month_name())\nTishrei Cheshvan ...\n\n>>> date = dates.GregorianDate(2010, 10, 6)\n>>> parshios.getparsha(date)\n[0]\n>>> parshios.getparsha_string(date, israel=True)\n'Beraishis'\n>>> parshios.getparsha_string(date, hebrew=True)\n'\u05d1\u05e8\u05d0\u05e9\u05d9\u05ea'\n>>> new_date = dates.GregorianDate(2021, 3, 10)\n>>> parshios.getparsha_string(new_date)\n'Vayakhel, Pekudei'\n>>> parshios.getparsha_string(new_date, hebrew=True)\n'\u05d5\u05d9\u05e7\u05d4\u05dc, \u05e4\u05e7\u05d5\u05d3\u05d9'\n\n\nContact\nFor questions and comments please raise an issue on GitHub or contact me at\nsimlist@gmail.com.\n\nLicense\nPyluach is licensed under the MIT license.\n\n\n", "description": "Package for working with Hebrew calendar dates and holidays."}, {"name": "pylog", "readme": "\n\n\n\nREADME.md\n\n\n\n\nAbstract (from WI 2020 paper)\nWe examine the history of Artificial Intelligence, from its audacious beginnings to the current day. We argue that constraint programming (a) is the rightful heir and modern-day descendant of that early work and (b) offers a more stable and reliable platform for AI than deep machine learning.\nWe offer a tutorial on constraint programming solvers that should be accessible to most software developers. We show how constraint programming works, how to implement constraint programming in Python, and how to integrate a Python constraint-programming solver with other Python code.\nTo install from PyPI: `pip install pylog`\nIf you are editing pylog source, running tests, or building installable packages for upload to PyPI, install as an editable install with test and build prerequisites:\npip install -e .[test,build] from the project root directory.  If you don't want to run tests or build an installable package,\npip install -e .\nTo build the installable package for upload:\npy -m build from the project root.\nTo run the tests, run them from the project root:\npytest\n\n\n", "description": "Implementation of logic programming based on miniKanren."}, {"name": "PyJWT", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyJWT\nSponsor\nInstalling\nUsage\nDocumentation\nTests\n\n\n\n\n\nREADME.rst\n\n\n\n\nPyJWT\n\n\n\n\n\n\n\nA Python implementation of RFC 7519. 
Original implementation was written by @progrium.\n\nSponsor\n\n\n\nIf you want to quickly add secure token-based authentication to Python projects, feel free to check Auth0's Python SDK and free plan at auth0.com/developers.\n\n\n\n\nInstalling\nInstall with pip:\n$ pip install PyJWT\n\nUsage\n>>> import jwt\n>>> encoded = jwt.encode({\"some\": \"payload\"}, \"secret\", algorithm=\"HS256\")\n>>> print(encoded)\neyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb21lIjoicGF5bG9hZCJ9.4twFt5NiznN84AWoo1d7KO1T_yoc0Z6XOpOVswacPZg\n>>> jwt.decode(encoded, \"secret\", algorithms=[\"HS256\"])\n{'some': 'payload'}\n\nDocumentation\nView the full docs online at https://pyjwt.readthedocs.io/en/stable/\n\nTests\nYou can run tests from the project root after cloning with:\n$ tox\n\n\n", "description": "Implementation of JSON Web Tokens for encoding and decoding compact claims between parties."}, {"name": "pygraphviz", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPyGraphviz\nSimple example\nInstall\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nPyGraphviz\n\n\n\nPyGraphviz is a Python interface to the Graphviz graph layout and\nvisualization package.\nWith PyGraphviz you can create, edit, read, write, and draw graphs using\nPython to access the Graphviz graph data structure and layout algorithms.\nPyGraphviz provides a similar programming interface to NetworkX\n(https://networkx.org).\n\nWebsite (including documentation): https://pygraphviz.github.io\nMailing list: https://groups.google.com/forum/#!forum/pygraphviz-discuss\nSource: https://github.com/pygraphviz/pygraphviz\nBug reports: https://github.com/pygraphviz/pygraphviz/issues\n\n\nSimple example\n>>> import pygraphviz as pgv\n>>> G = pgv.AGraph()\n>>> G.add_node(\"a\")\n>>> G.add_edge(\"b\", \"c\")\n>>> print(G)\nstrict graph \"\" {\n        a;\n        b -- c;\n}\n\nInstall\nPyGraphviz requires Graphviz.\nPlease see INSTALL.txt for details.\n\nLicense\nReleased under the 3-Clause BSD license (see LICENSE):\nCopyright (C) 2006-2022 PyGraphviz Developers\nAric Hagberg <aric.hagberg@gmail.gov>\nDan Schult <dschult@colgate.edu>\nManos Renieris\n\n\n\n", "description": "Python interface to the Graphviz graph visualization package."}, {"name": "Pygments", "readme": "\nPygments is a syntax highlighting package written in Python.\nIt is a generic syntax highlighter suitable for use in code hosting, forums,\nwikis or other applications that need to prettify source code.  
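For example, here is a minimal library-usage sketch (an illustrative addition, not from the upstream README; it assumes only the stock PythonLexer and HtmlFormatter that ship with Pygments):\nfrom pygments import highlight\nfrom pygments.lexers import PythonLexer\nfrom pygments.formatters import HtmlFormatter\n\n# highlight() runs the lexer over the source and renders it with the formatter,\n# returning the result as a string (HTML markup in this case).\nsource = 'for i in range(3): print(i)'\nprint(highlight(source, PythonLexer(), HtmlFormatter()))\nThe same pipeline can target other output formats (LaTeX, RTF, ANSI sequences) by swapping the formatter class.\n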
Highlights\nare:\n\na wide range of over 500 languages and other text formats is supported\nspecial attention is paid to details, increasing quality by a fair amount\nsupport for new languages and formats are added easily\na number of output formats, presently HTML, LaTeX, RTF, SVG, all image\nformats that PIL supports and ANSI sequences\nit is usable as a command-line tool and as a library\n\nCopyright 2006-2023 by the Pygments team, see AUTHORS.\nLicensed under the BSD, see LICENSE for details.\n", "description": "Generic syntax highlighter with over 500 language lexers and multiple output formats."}, {"name": "pydyf", "readme": "\npydyf is a low-level PDF generator written in Python and based on PDF\nspecification 1.7.\n\nFree software: BSD license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/pydyf\nChangelog: https://github.com/CourtBouillon/pydyf/releases\nCode, issues, tests: https://github.com/CourtBouillon/pydyf\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to pydyf. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the BSD\n3-clause license, without any additional terms or conditions. For full\nauthorship information, see the version control history.\n", "description": "Low-level PDF generator written in Python."}, {"name": "pydub", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Manipulate audio with a simple, easy to use interface."}, {"name": "pydot", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout\nExamples\nInput\nEdit\nOutput\nMore help\nInstallation\nDependencies\nLicense\nContacts\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\nAbout\npydot:\n\nis an interface to Graphviz\ncan parse and dump into the DOT language used by GraphViz,\nis written in pure Python,\n\nand networkx can convert its graphs to pydot.\nDevelopment occurs at GitHub, where you can report issues and\ncontribute code.\nExamples\nThe examples here will show you the most common input, editing and\noutput methods.\nInput\nNo matter what you want to do with pydot, it will need some input to\nstart with. Here are 3 common options:\n\n\nImport a graph from an existing DOT-file.\nUse this method if you already have a DOT-file describing a graph,\nfor example as output of another program. Let's say you already\nhave this example.dot (based on an example from Wikipedia):\ngraph my_graph {\n   bgcolor=\"yellow\";\n   a [label=\"Foo\"];\n   b [shape=circle];\n   a -- b -- c [color=blue];\n}\nJust read the graph from the DOT-file:\nimport pydot\n\ngraphs = pydot.graph_from_dot_file(\"example.dot\")\ngraph = graphs[0]\n\n\nor: Parse a graph from an existing DOT-string.\nUse this method if you already have a DOT-string describing a\ngraph in a Python variable:\nimport pydot\n\ndot_string = \"\"\"graph my_graph {\n    bgcolor=\"yellow\";\n    a [label=\"Foo\"];\n    b [shape=circle];\n    a -- b -- c [color=blue];\n}\"\"\"\n\ngraphs = pydot.graph_from_dot_data(dot_string)\ngraph = graphs[0]\n\n\nor: Create a graph from scratch using pydot objects.\nNow this is where the cool stuff starts. Use this method if you\nwant to build new graphs from Python.\nimport pydot\n\ngraph = pydot.Dot(\"my_graph\", graph_type=\"graph\", bgcolor=\"yellow\")\n\n# Add nodes\nmy_node = pydot.Node(\"a\", label=\"Foo\")\ngraph.add_node(my_node)\n# Or, without using an intermediate variable:\ngraph.add_node(pydot.Node(\"b\", shape=\"circle\"))\n\n# Add edges\nmy_edge = pydot.Edge(\"a\", \"b\", color=\"blue\")\ngraph.add_edge(my_edge)\n# Or, without using an intermediate variable:\ngraph.add_edge(pydot.Edge(\"b\", \"c\", color=\"blue\"))\nImagine using these basic building blocks from your Python program\nto dynamically generate a graph. For example, start out with a\nbasic pydot.Dot graph object, then loop through your data while\nadding nodes and edges. Use values from your data as labels, to\ndetermine shapes, edges and so forth. 
This way, you can easily\nbuild visualizations of thousands of interconnected items.\n\n\nor: Convert a NetworkX graph to a pydot graph.\nNetworkX has conversion methods for pydot graphs:\nimport networkx\nimport pydot\n\n# See NetworkX documentation on how to build a NetworkX graph.\n\ngraph = networkx.drawing.nx_pydot.to_pydot(my_networkx_graph)\n\n\nEdit\nYou can now further manipulate your graph using pydot methods:\n\n\nAdd further nodes and edges:\ngraph.add_edge(pydot.Edge(\"b\", \"d\", style=\"dotted\"))\n\n\nEdit attributes of graph, nodes and edges:\ngraph.set_bgcolor(\"lightyellow\")\ngraph.get_node(\"b\")[0].set_shape(\"box\")\n\n\nOutput\nHere are 3 different output options:\n\n\nGenerate an image.\nTo generate an image of the graph, use one of the create_*() or\nwrite_*() methods.\n\n\nIf you need to further process the output in Python, the\ncreate_* methods will get you a Python bytes object:\noutput_graphviz_svg = graph.create_svg()\n\n\nIf instead you just want to save the image to a file, use one of\nthe write_* methods:\ngraph.write_png(\"output.png\")\n\n\n\n\nRetrieve the DOT string.\nThere are two different DOT strings you can retrieve:\n\n\nThe \"raw\" pydot DOT: This is generated the fastest and will\nusually still look quite similar to the DOT you put in. It is\ngenerated by pydot itself, without calling Graphviz.\n# As a string:\noutput_raw_dot = graph.to_string()\n# Or, save it as a DOT-file:\ngraph.write_raw(\"output_raw.dot\")\n\n\nThe Graphviz DOT: You can use it to check how Graphviz lays out\nthe graph before it produces an image. It is generated by\nGraphviz.\n# As a bytes literal:\noutput_graphviz_dot = graph.create_dot()\n# Or, save it as a DOT-file:\ngraph.write_dot(\"output_graphviz.dot\")\n\n\n\n\nConvert to a NetworkX graph.\nHere as well, NetworkX has a conversion method for pydot graphs:\nmy_networkx_graph = networkx.drawing.nx_pydot.from_pydot(graph)\n\n\nMore help\nFor more help, see the docstrings of the various pydot objects and\nmethods. For example, help(pydot), help(pydot.Graph) and\nhelp(pydot.Dot.write).\nMore documentation contributions welcome.\nInstallation\nFrom PyPI using pip:\npip install pydot\nFrom source:\npython setup.py install\nDependencies\n\n\npyparsing: used only for loading DOT files,\ninstalled automatically during pydot installation.\n\n\nGraphViz: used to render graphs as PDF, PNG, SVG, etc.\nShould be installed separately, using your system's\npackage manager, something similar (e.g., MacPorts),\nor from its source.\n\n\nLicense\nDistributed under an MIT license.\nContacts\nMaintainers:\n\nSebastian Kalinowski sebastian@kalinowski.eu (GitHub: @prmtl)\nPeter Nowee peter@peternowee.com (GitHub: @peternowee)\n\nOriginal author: Ero Carrera ero.carrera@gmail.com\n\n\n", "description": "Python interface to Graphviz for graph manipulation and visualization."}, {"name": "pydantic", "readme": "\nPydantic\n\n\n\n\n\n\n\n\nData validation using Python type hints.\nFast and extensible, Pydantic plays nicely with your linters/IDE/brain.\nDefine how data should be in pure, canonical Python 3.7+; validate it with Pydantic.\nPydantic Company :rocket:\nWe've started a company based on the principles that I believe have led to Pydantic's success.\nLearning more from the Company Announcement.\nPydantic V1.10 vs. 
V2\nPydantic V2 is a ground-up rewrite that offers many new features, performance improvements, and some breaking changes compared to Pydantic V1.\nIf you're using Pydantic V1 you may want to look at the\npydantic V1.10 Documentation or,\n1.10.X-fixes git branch. Pydantic V2 also ships with the latest version of Pydantic V1 built in so that you can incrementally upgrade your code base and projects: from pydantic import v1 as pydantic_v1.\nHelp\nSee documentation for more details.\nInstallation\nInstall using pip install -U pydantic or conda install pydantic -c conda-forge.\nFor more installation options to make Pydantic even faster,\nsee the Install section in the documentation.\nA Simple Example\nfrom datetime import datetime\nfrom typing import List, Optional\nfrom pydantic import BaseModel\n\nclass User(BaseModel):\n    id: int\n    name: str = 'John Doe'\n    signup_ts: Optional[datetime] = None\n    friends: List[int] = []\n\nexternal_data = {'id': '123', 'signup_ts': '2017-06-01 12:22', 'friends': [1, '2', b'3']}\nuser = User(**external_data)\nprint(user)\n#> User id=123 name='John Doe' signup_ts=datetime.datetime(2017, 6, 1, 12, 22) friends=[1, 2, 3]\nprint(user.id)\n#> 123\n\nContributing\nFor guidance on setting up a development environment and how to make a\ncontribution to Pydantic, see\nContributing to Pydantic.\nReporting a Security Vulnerability\nSee our security policy.\nChangelog\nv2.3.0 (2023-08-23)\nGitHub release\n\n\ud83d\udd25 Remove orphaned changes file from repo by @lig in https://github.com/pydantic/pydantic/pull/7168\nAdd copy button on documentation by @Kludex in https://github.com/pydantic/pydantic/pull/7190\nFix docs on JSON type by @Kludex in https://github.com/pydantic/pydantic/pull/7189\nUpdate mypy 1.5.0 to 1.5.1 in CI by @hramezani in https://github.com/pydantic/pydantic/pull/7191\nfix download links badge by @samuelcolvin in https://github.com/pydantic/pydantic/pull/7200\nadd 2.2.1 to changelog by @samuelcolvin in https://github.com/pydantic/pydantic/pull/7212\nMake ModelWrapValidator protocols generic by @dmontagu in https://github.com/pydantic/pydantic/pull/7154\nCorrect Field(..., exclude: bool) docs by @samuelcolvin in https://github.com/pydantic/pydantic/pull/7214\nMake shadowing attributes a warning instead of an error by @adriangb in https://github.com/pydantic/pydantic/pull/7193\nDocument Base64Str and Base64Bytes by @Kludex in https://github.com/pydantic/pydantic/pull/7192\nFix config.defer_build for serialization first cases by @samuelcolvin in https://github.com/pydantic/pydantic/pull/7024\nclean Model docstrings in JSON Schema by @samuelcolvin in https://github.com/pydantic/pydantic/pull/7210\nfix #7228 (typo): docs in validators.md to correct validate_default kwarg by @lmmx in https://github.com/pydantic/pydantic/pull/7229\n\u2705 Implement tzinfo.fromutc method for TzInfo in pydantic-core by @lig in https://github.com/pydantic/pydantic/pull/7019\nSupport __get_validators__ by @hramezani in https://github.com/pydantic/pydantic/pull/7197\n\nv2.2.1 (2023-08-18)\nGitHub release\n\nMake xfailing test for root model extra stop xfailing by @dmontagu in #6937\nOptimize recursion detection by stopping on the second visit for the same object by @mciucu in #7160\nfix link in docs by @tlambert03 in #7166\nReplace MiMalloc w/ default allocator by @adriangb in pydantic/pydantic-core#900\nBump pydantic-core to 2.6.1 and prepare 2.2.1 release by @adriangb in #7176\n\nv2.2.0 (2023-08-17)\nGitHub release\n\nSplit \"pipx install\" setup command into two 
commands on the documentation site by @nomadmtb in #6869\nDeprecate Field.include by @hramezani in #6852\nFix typo in default factory error msg by @hramezani in #6880\nSimplify handling of typing.Annotated in GenerateSchema by @dmontagu in #6887\nRe-enable fastapi tests in CI by @dmontagu in #6883\nMake it harder to hit collisions with json schema defrefs by @dmontagu in #6566\nCleaner error for invalid input to Path fields by @samuelcolvin in #6903\n:memo: support Coordinate Type by @yezz123 in #6906\nFix ForwardRef wrapper for py 3.10.0 (shim until bpo-45166) by @randomir in #6919\nFix misbehavior related to copying of RootModel by @dmontagu in #6918\nFix issue with recursion error caused by ParamSpec by @dmontagu in #6923\nAdd section about Constrained classes to the Migration Guide by @Kludex in #6924\nUse main branch for badge links by @Viicos in #6925\nAdd test for v1/v2 Annotated discrepancy by @carlbordum in #6926\nMake the v1 mypy plugin work with both v1 and v2 by @dmontagu in #6921\nFix issue where generic models couldn't be parametrized with BaseModel by @dmontagu in #6933\nRemove xfail for discriminated union with alias by @dmontagu in #6938\nadd field_serializer to computed_field by @andresliszt in #6965\nUse union_schema with Type[Union[...]] by @JeanArhancet in #6952\nFix inherited typeddict attributes / config by @adriangb in #6981\nfix dataclass annotated before validator called twice by @davidhewitt in #6998\nUpdate test-fastapi deselected tests by @hramezani in #7014\nFix validator doc format by @hramezani in #7015\nFix typo in docstring of model_json_schema by @AdamVinch-Federated in #7032\nremove unused \"type ignores\" with pyright by @samuelcolvin in #7026\nAdd benchmark representing FastAPI startup time by @adriangb in #7030\nFix json_encoders for Enum subclasses by @adriangb in #7029\nUpdate docstring of ser_json_bytes regarding base64 encoding by @Viicos in #7052\nAllow @validate_call to work on async methods by @adriangb in #7046\nFix: mypy error with Settings and SettingsConfigDict by @JeanArhancet in #7002\nFix some typos (repeated words and it's/its) by @eumiro in #7063\nFix the typo in docstring by @harunyasar in #7062\nDocs: Fix broken URL in the pydantic-settings package recommendation by @swetjen in #6995\nHandle constraints being applied to schemas that don't accept it by @adriangb in #6951\nReplace almost_equal_floats with math.isclose by @eumiro in #7082\nbump pydantic-core to 2.5.0 by @davidhewitt in #7077\nAdd short_version and use it in links by @hramezani in #7115\n\ud83d\udcdd Add usage link to RootModel by @Kludex in #7113\nRevert \"Fix default port for mongosrv DSNs (#6827)\" by @Kludex in #7116\nClarify validate_default and _Unset handling in usage docs and migration guide by @benbenbang in #6950\nTweak documentation of Field.exclude by @Viicos in #7086\nDo not require validate_assignment to use Field.frozen by @Viicos in #7103\ntweaks to _core_utils by @samuelcolvin in #7040\nMake DefaultDict working with set by @hramezani in #7126\nDon't always require typing.Generic as a base for partially parametrized models by @dmontagu in #7119\nFix issue with JSON schema incorrectly using parent class core schema by @dmontagu in #7020\nFix xfailed test related to TypedDict and alias_generator by @dmontagu in #6940\nImprove error message for NameEmail by @dmontagu in #6939\nFix generic computed fields by @dmontagu in #6988\nReflect namedtuple default values during validation by @dmontagu in #7144\nUpdate dependencies, fix pydantic-core usage, fix CI 
issues by @dmontagu in #7150\nAdd mypy 1.5.0 by @hramezani in #7118\nHandle non-json native enum values by @adriangb in #7056\ndocument round_trip in Json type documentation  by @jc-louis in #7137\nRelax signature checks to better support builtins and C extension functions as validators by @adriangb in #7101\nadd union_mode='left_to_right' by @davidhewitt in #7151\nInclude an error message hint for inherited ordering by @yvalencia91 in #7124\nFix one docs link and resolve some warnings for two others by @dmontagu in #7153\nInclude Field extra keys name in warning by @hramezani in #7136\n\nv2.1.1 (2023-07-25)\nGitHub release\n\nSkip FieldInfo merging when unnecessary by @dmontagu in #6862\n\nv2.1.0 (2023-07-25)\nGitHub release\n\nAdd StringConstraints for use as Annotated metadata by @adriangb in #6605\nTry to fix intermittently failing CI by @adriangb in #6683\nRemove redundant example of optional vs default. by @ehiggs-deliverect in #6676\nDocs update by @samuelcolvin in #6692\nRemove the Validate always section in validator docs by @adriangb in #6679\nFix recursion error in json schema generation by @adriangb in #6720\nFix incorrect subclass check for secretstr by @AlexVndnblcke in #6730\nupdate pdm / pdm lockfile to 2.8.0 by @davidhewitt in #6714\nunpin pdm on more CI jobs by @davidhewitt in #6755\nimprove source locations for auxiliary packages in docs by @davidhewitt in #6749\nAssume builtins don't accept an info argument by @adriangb in #6754\nFix bug where calling help(BaseModelSubclass) raises errors by @hramezani in #6758\nFix mypy plugin handling of @model_validator(mode=\"after\") by @ljodal in #6753\nupdate pydantic-core to 2.3.1 by @davidhewitt in #6756\nMypy plugin for settings by @hramezani in #6760\nUse contentSchema keyword for JSON schema by @dmontagu in #6715\nfast-path checking finite decimals by @davidhewitt in #6769\nDocs update by @samuelcolvin in #6771\nImprove json schema doc by @hramezani in #6772\nUpdate validator docs by @adriangb in #6695\nFix typehint for wrap validator by @dmontagu in #6788\n\ud83d\udc1b Fix validation warning for unions of Literal and other type by @lig in #6628\nUpdate documentation for generics support in V2 by @tpdorsey in #6685\nadd pydantic-core build info to version_info() by @samuelcolvin in #6785\nFix pydantic dataclasses that use slots with default values by @dmontagu in #6796\nFix inheritance of hash function for frozen models by @dmontagu in #6789\n\u2728 Add SkipJsonSchema annotation by @Kludex in #6653\nError if an invalid field name is used with Field by @dmontagu in #6797\nAdd GenericModel to MOVED_IN_V2 by @adriangb in #6776\nRemove unused code from docs/usage/types/custom.md by @hramezani in #6803\nFix float -> Decimal coercion precision loss by @adriangb in #6810\nremove email validation from the north star benchmark by @davidhewitt in #6816\nFix link to mypy by @progsmile in #6824\nImprove initialization hooks example by @hramezani in #6822\nFix default port for mongosrv DSNs by @dmontagu in #6827\nImprove API documentation, in particular more links between usage and API docs by @samuelcolvin in #6780\nupdate pydantic-core to 2.4.0 by @davidhewitt in #6831\nFix annotated_types.MaxLen validator for custom sequence types by @ImogenBits in #6809\nUpdate V1 by @hramezani in #6833\nMake it so callable JSON schema extra works by @dmontagu in #6798\nFix serialization issue with InstanceOf by @dmontagu in #6829\nAdd back support for json_encoders by @adriangb in #6811\nUpdate field annotations when building the schema by @dmontagu 
in #6838\nUse WeakValueDictionary to fix generic memory leak by @dmontagu in #6681\nAdd config.defer_build to optionally make model building lazy by @samuelcolvin in #6823\ndelegate UUID serialization to pydantic-core by @davidhewitt in #6850\nUpdate json_encoders docs by @adriangb in #6848\nFix error message for staticmethod/classmethod order with validate_call by @dmontagu in #6686\nImprove documentation for Config by @samuelcolvin in #6847\nUpdate serialization doc to mention Field.exclude takes priority over call-time include/exclude by @hramezani in #6851\nAllow customizing core schema generation by making GenerateSchema public by @adriangb in #6737\n\nv2.0.3 (2023-07-05)\nGitHub release\n\nMention PyObject (v1) moving to ImportString (v2) in migration doc by @slafs in #6456\nFix release-tweet CI by @Kludex in #6461\nRevise the section on required / optional / nullable fields. by @ybressler in #6468\nWarn if a type hint is not in fact a type by @adriangb in #6479\nReplace TransformSchema with GetPydanticSchema by @dmontagu in #6484\nFix the un-hashability of various annotation types, for use in caching generic containers by @dmontagu in #6480\nPYD-164: Rework custom types docs by @adriangb in #6490\nFix ci by @adriangb in #6507\nFix forward ref in generic by @adriangb in #6511\nFix generation of serialization JSON schemas for core_schema.ChainSchema by @dmontagu in #6515\nDocument the change in Field.alias behavior in Pydantic V2 by @hramezani in #6508\nGive better error message attempting to compute the json schema of a model with undefined fields by @dmontagu in #6519\nDocument alias_priority by @tpdorsey in #6520\nAdd redirect for types documentation by @tpdorsey in #6513\nAllow updating docs without release by @samuelcolvin in #6551\nEnsure docs tests always run in the right folder by @dmontagu in #6487\nDefer evaluation of return type hints for serializer functions by @dmontagu in #6516\nDisable E501 from Ruff and rely on just Black by @adriangb in #6552\nUpdate JSON Schema documentation for V2 by @tpdorsey in #6492\nAdd documentation of cyclic reference handling by @dmontagu in #6493\nRemove the need for change files by @samuelcolvin in #6556\nadd \"north star\" benchmark by @davidhewitt in #6547\nUpdate Dataclasses docs by @tpdorsey in #6470\n\u267b\ufe0f Use different error message on v1 redirects by @Kludex in #6595\n\u2b06 Upgrade pydantic-core to v2.2.0 by @lig in #6589\nFix serialization for IPvAny by @dmontagu in #6572\nImprove CI by using PDM instead of pip to install typing-extensions by @adriangb in #6602\nAdd enum error type docs  by @lig in #6603\n\ud83d\udc1b Fix max_length for unicode strings by @lig in #6559\nAdd documentation for accessing features via pydantic.v1 by @tpdorsey in #6604\nInclude extra when iterating over a model by @adriangb in #6562\nFix typing of model_validator by @adriangb in #6514\nTouch up Decimal validator by @adriangb in #6327\nFix various docstrings using fixed pytest-examples by @dmontagu in #6607\nHandle function validators in a discriminated union by @dmontagu in #6570\nReview json_schema.md by @tpdorsey in #6608\nMake validate_call work on basemodel methods by @dmontagu in #6569\nadd test for big int json serde by @davidhewitt in #6614\nFix pydantic dataclass problem with dataclasses.field default_factory by @hramezani in #6616\nFixed mypy type inference for TypeAdapter by @zakstucke in #6617\nMake it work to use None as a generic parameter by @dmontagu in #6609\nMake it work to use $ref as an alias by @dmontagu in #6568\nadd note to 
migration guide about changes to AnyUrl etc by @davidhewitt in #6618\n\ud83d\udc1b Support defining json_schema_extra on RootModel using Field by @lig in #6622\nUpdate pre-commit to prevent commits to main branch on accident by @dmontagu in #6636\nFix PDM CI for python 3.7 on MacOS/windows by @dmontagu in #6627\nProduce more accurate signatures for pydantic dataclasses by @dmontagu in #6633\nUpdates to Url types for Pydantic V2 by @tpdorsey in #6638\nFix list markdown in transform docstring by @StefanBRas in #6649\nsimplify slots_dataclass construction to appease mypy by @davidhewitt in #6639\nUpdate TypedDict schema generation docstring by @adriangb in #6651\nDetect and lint-error for prints by @dmontagu in #6655\nAdd xfailing test for pydantic-core PR 766 by @dmontagu in #6641\nIgnore unrecognized fields from dataclasses metadata by @dmontagu in #6634\nMake non-existent class getattr a mypy error by @dmontagu in #6658\nUpdate pydantic-core to 2.3.0 by @hramezani in #6648\nUse OrderedDict from typing_extensions by @dmontagu in #6664\nFix typehint for JSON schema extra callable by @dmontagu in #6659\n\nv2.0.2 (2023-07-05)\nGitHub release\n\nFix bug where round-trip pickling/unpickling a RootModel would change the value of __dict__, #6457 by @dmontagu\nAllow single-item discriminated unions, #6405 by @dmontagu\nFix issue with union parsing of enums, #6440 by @dmontagu\nDocs: Fixed constr documentation, renamed old regex to new pattern, #6452 by @miili\nChange GenerateJsonSchema.generate_definitions signature, #6436 by @dmontagu\n\nSee the full changelog here\nv2.0.1 (2023-07-04)\nGitHub release\nFirst patch release of Pydantic V2\n\nExtra fields added via setattr (i.e. m.some_extra_field = 'extra_value')\nare added to .model_extra if model_config extra='allowed'. Fixed #6333, #6365 by @aaraney\nAutomatically unpack JSON schema '$ref' for custom types, #6343 by @adriangb\nFix tagged unions multiple processing in submodels, #6340 by @suharnikov\n\nSee the full changelog here\nv2.0 (2023-06-30)\nGitHub release\nPydantic V2 is here! :tada:\nSee this post for more details.\nv2.0b3 (2023-06-16)\nThird beta pre-release of Pydantic V2\nSee the full changelog here\nv2.0b2 (2023-06-03)\nAdd from_attributes runtime flag to TypeAdapter.validate_python and BaseModel.model_validate.\nSee the full changelog here\nv2.0b1 (2023-06-01)\nFirst beta pre-release of Pydantic V2\nSee the full changelog here\nv2.0a4 (2023-05-05)\nFourth pre-release of Pydantic V2\nSee the full changelog here\nv2.0a3 (2023-04-20)\nThird pre-release of Pydantic V2\nSee the full changelog here\nv2.0a2 (2023-04-12)\nSecond pre-release of Pydantic V2\nSee the full changelog here\nv2.0a1 (2023-04-03)\nFirst pre-release of Pydantic V2!\nSee this post for more details.\nv1.10.12 (2023-07-24)\n\nFixes the maxlen property being dropped on deque validation. Happened only if the deque item has been typed. 
Changes the _validate_sequence_like func, #6581 by @maciekglowka\n\nv1.10.11 (2023-07-04)\n\nImporting create_model in tools.py through relative path instead of absolute path - so that it doesn't import V2 code when copied over to V2 branch, #6361 by @SharathHuddar\n\nv1.10.10 (2023-06-30)\n\nAdd Pydantic Json field support to settings management, #6250 by @hramezani\nFixed literal validator errors for unhashable values, #6188 by @markus1978\nFixed bug with generics receiving forward refs, #6130 by @mark-todd\nUpdate install method of FastAPI for internal tests in CI, #6117 by @Kludex\n\nv1.10.9 (2023-06-07)\n\nFix trailing zeros not ignored in Decimal validation, #5968 by @hramezani\nFix mypy plugin for v1.4.0, #5928 by @cdce8p\nAdd future and past date hypothesis strategies, #5850 by @bschoenmaeckers\nDiscourage usage of Cython 3 with Pydantic 1.x, #5845 by @lig\n\nv1.10.8 (2023-05-23)\n\nFix a bug in Literal usage with typing-extension==4.6.0, #5826 by @hramezani\nThis solves the (closed) issue #3849 where aliased fields that use discriminated union fail to validate when the data contains the non-aliased field name, #5736 by @benwah\nUpdate email-validator dependency to >=2.0.0post2, #5627 by @adriangb\nupdate AnyClassMethod for changes in python/typeshed#9771, #5505 by @ITProKyle\n\nv1.10.7 (2023-03-22)\n\nFix creating schema from model using ConstrainedStr with regex as dict key, #5223 by @matejetz\nAddress bug in mypy plugin caused by explicit_package_bases=True, #5191 by @dmontagu\nAdd implicit defaults in the mypy plugin for Field with no default argument, #5190 by @dmontagu\nFix schema generated for Enum values used as Literals in discriminated unions, #5188 by @javibookline\nFix mypy failures caused by the pydantic mypy plugin when users define from_orm in their own classes, #5187 by @dmontagu\nFix InitVar usage with pydantic dataclasses, mypy version 1.1.1 and the custom mypy plugin, #5162 by @cdce8p\n\nv1.10.6 (2023-03-08)\n\nImplement logic to support creating validators from non standard callables by using defaults to identify them and unwrapping functools.partial and functools.partialmethod when checking the signature, #5126 by @JensHeinrich\nFix mypy plugin for v1.1.1, and fix dataclass_transform decorator for pydantic dataclasses, #5111 by @cdce8p\nRaise ValidationError, not ConfigError, when a discriminator value is unhashable, #4773 by @kurtmckee\n\nv1.10.5 (2023-02-15)\n\nFix broken parametrized bases handling with GenericModels with complex sets of models, #5052 by @MarkusSintonen\nInvalidate mypy cache if plugin config changes, #5007 by @cdce8p\nFix RecursionError when deep-copying dataclass types wrapped by pydantic, #4949 by @mbillingr\nFix X | Y union syntax breaking GenericModel, #4146 by @thenx\nSwitch coverage badge to show coverage for this branch/release, #5060 by @samuelcolvin\n\nv1.10.4 (2022-12-30)\n\nChange dependency to typing-extensions>=4.2.0, #4885 by @samuelcolvin\n\nv1.10.3 (2022-12-29)\nNOTE: v1.10.3 was \"yanked\" from PyPI due to #4885 which is fixed in v1.10.4\n\nfix parsing of custom root models, #4883 by @gou177\nfix: use dataclass proxy for frozen or empty dataclasses, #4878 by @PrettyWood\nFix schema and schema_json on models where a model instance is a one of default values, #4781 by @Bobronium\nAdd Jina AI to sponsors on docs index page, #4767 by @samuelcolvin\nfix: support assignment on DataclassProxy, #4695 by @PrettyWood\nAdd postgresql+psycopg as allowed scheme for PostgreDsn to make it usable with SQLAlchemy 2, #4689 by @morian\nAllow 
dict schemas to have both patternProperties and additionalProperties, #4641 by @jparise\nFixes error passing None for optional lists with unique_items, #4568 by @mfulgo\nFix GenericModel with Callable param raising a TypeError, #4551 by @mfulgo\nFix field regex with StrictStr type annotation, #4538 by @sisp\nCorrect dataclass_transform keyword argument name from field_descriptors to field_specifiers, #4500 by @samuelcolvin\nfix: avoid multiple calls of __post_init__ when dataclasses are inherited, #4487 by @PrettyWood\nReduce the size of binary wheels, #2276 by @samuelcolvin\n\nv1.10.2 (2022-09-05)\n\nRevert Change: Revert percent encoding of URL parts which was originally added in #4224, #4470 by @samuelcolvin\nPrevent long (length > 4_300) strings/bytes as input to int fields, see\npython/cpython#95778 and\nCVE-2020-10735, #1477 by @samuelcolvin\nfix: dataclass wrapper was not always called, #4477 by @PrettyWood\nUse tomllib on Python 3.11 when parsing mypy configuration, #4476 by @hauntsaninja\nBasic fix of GenericModel cache to detect order of arguments in Union models, #4474 by @sveinugu\nFix mypy plugin when using bare types like list and dict as default_factory, #4457 by @samuelcolvin\n\nv1.10.1 (2022-08-31)\n\nAdd __hash__ method to pydancic.color.Color class, #4454 by @czaki\n\nv1.10.0 (2022-08-30)\n\nRefactor the whole pydantic dataclass decorator to really act like its standard lib equivalent.\nIt hence keeps __eq__, __hash__, ... and makes comparison with its non-validated version possible.\nIt also fixes usage of frozen dataclasses in fields and usage of default_factory in nested dataclasses.\nThe support of Config.extra has been added.\nFinally, config customization directly via a dict is now possible, #2557 by @PrettyWood\n\nBREAKING CHANGES:\n\nThe compiled boolean (whether pydantic is compiled with cython) has been moved from main.py to version.py\nNow that Config.extra is supported, dataclass ignores by default extra arguments (like BaseModel)\n\n\nFix PEP487 __set_name__ protocol in BaseModel for PrivateAttrs, #4407 by @tlambert03\nAllow for custom parsing of environment variables via parse_env_var in Config, #4406 by @acmiyaguchi\nRename master to main, #4405 by @hramezani\nFix StrictStr does not raise ValidationError when max_length is present in Field, #4388 by @hramezani\nMake SecretStr and SecretBytes hashable, #4387 by @chbndrhnns\nFix StrictBytes does not raise ValidationError when max_length is present in Field, #4380 by @JeanArhancet\nAdd support for bare type, #4375 by @hramezani\nSupport Python 3.11, including binaries for 3.11 in PyPI, #4374 by @samuelcolvin\nAdd support for re.Pattern, #4366 by @hramezani\nFix __post_init_post_parse__ is incorrectly passed keyword arguments when no __post_init__ is defined, #4361 by @hramezani\nFix implicitly importing ForwardRef and Callable from pydantic.typing instead of typing and also expose MappingIntStrAny, #4358 by @aminalaee\nremove Any types from the dataclass decorator so it can be used with the disallow_any_expr mypy option, #4356 by @DetachHead\nmoved repo to pydantic/pydantic, #4348 by @yezz123\nfix \"extra fields not permitted\" error when dataclass with Extra.forbid is validated multiple times, #4343 by @detachhead\nAdd Python 3.9 and 3.10 examples to docs, #4339 by @Bobronium\nDiscriminated union models now use oneOf instead of anyOf when generating OpenAPI schema definitions, #4335 by @MaxwellPayne\nAllow type checkers to infer inner type of Json type. 
Json[list[str]] will be now inferred as list[str],\nJson[Any] should be used instead of plain Json.\nRuntime behaviour is not changed, #4332 by @Bobronium\nAllow empty string aliases by using a alias is not None check, rather than bool(alias), #4253 by @sergeytsaplin\nUpdate ForwardRefs in Field.outer_type_, #4249 by @JacobHayes\nThe use of __dataclass_transform__ has been replaced by typing_extensions.dataclass_transform, which is the preferred way to mark pydantic models as a dataclass under PEP 681, #4241 by @multimeric\nUse parent model's Config when validating nested NamedTuple fields, #4219 by @synek\nUpdate BaseModel.construct to work with aliased Fields, #4192 by @kylebamos\nCatch certain raised errors in smart_deepcopy and revert to deepcopy if so, #4184 by @coneybeare\nAdd Config.anystr_upper and to_upper kwarg to constr and conbytes, #4165 by @satheler\nFix JSON schema for set and frozenset when they include default values, #4155 by @aminalaee\nTeach the mypy plugin that methods decorated by @validator are classmethods, #4102 by @DMRobertson\nImprove mypy plugin's ability to detect required fields, #4086 by @richardxia\nSupport fields of type Type[] in schema, #4051 by @aminalaee\nAdd default value in JSON Schema when const=True, #4031 by @aminalaee\nAdds reserved word check to signature generation logic, #4011 by @strue36\nFix Json strategy failure for the complex nested field, #4005 by @sergiosim\nAdd JSON-compatible float constraint allow_inf_nan, #3994 by @tiangolo\nRemove undefined behaviour when env_prefix had characters in common with env_nested_delimiter, #3975 by @arsenron\nSupport generics model with create_model, #3945 by @hot123s\nallow submodels to overwrite extra field info, #3934 by @PrettyWood\nDocument and test structural pattern matching (PEP 636) on BaseModel, #3920 by @irgolic\nFix incorrect deserialization of python timedelta object to ISO 8601 for negative time deltas.\nMinus was serialized in incorrect place (\"P-1DT23H59M59.888735S\" instead of correct \"-P1DT23H59M59.888735S\"), #3899 by @07pepa\nFix validation of discriminated union fields with an alias when passing a model instance, #3846 by @chornsby\nAdd a CockroachDsn type to validate CockroachDB connection strings. 
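A hedged sketch of the new type in use (the Settings class and DSN value are made up, assuming pydantic 1.10+):
from pydantic import BaseModel, CockroachDsn

class Settings(BaseModel):
    dsn: CockroachDsn

s = Settings(dsn='cockroachdb://root:secret@localhost:26257/defaultdb')
print(s.dsn.scheme, s.dsn.port)  # cockroachdb 26257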
The type\nsupports the following schemes: cockroachdb, cockroachdb+psycopg2 and cockroachdb+asyncpg, #3839 by @blubber\nFix MyPy plugin to not override pre-existing __init__ method in models, #3824 by @patrick91\nFix mypy version checking, #3783 by @KotlinIsland\nsupport overwriting dunder attributes of BaseModel instances, #3777 by @PrettyWood\nAdded ConstrainedDate and condate, #3740 by @hottwaj\nSupport kw_only in dataclasses, #3670 by @detachhead\nAdd comparison method for Color class, #3646 by @aminalaee\nDrop support for python3.6, associated cleanup, #3605 by @samuelcolvin\ncreated new function to_lower_camel() for \"non pascal case\" camel case, #3463 by @schlerp\nAdd checks to default and default_factory arguments in Mypy plugin, #3430 by @klaa97\nfix mangling of inspect.signature for BaseModel, #3413 by @fix-inspect-signature\nAdds the SecretField abstract class so that all the current and future secret fields like SecretStr and SecretBytes will derive from it, #3409 by @expobrain\nSupport multi hosts validation in PostgresDsn, #3337 by @rglsk\nFix parsing of very small numeric timedelta values, #3315 by @samuelcolvin\nUpdate SecretsSettingsSource to respect config.case_sensitive, #3273 by @JeanArhancet\nAdd MongoDB network data source name (DSN) schema, #3229 by @snosratiershad\nAdd support for multiple dotenv files, #3222 by @rekyungmin\nRaise an explicit ConfigError when multiple fields are incorrectly set for a single validator, #3215 by @SunsetOrange\nAllow ellipsis on Fields inside Annotated for TypedDicts required, #3133 by @ezegomez\nCatch overflow errors in int_validator, #3112 by @ojii\nAdds a __rich_repr__ method to Representation class which enables pretty printing with Rich, #3099 by @willmcgugan\nAdd percent encoding in AnyUrl and descendent types, #3061 by @FaresAhmedb\nvalidate_arguments decorator now supports alias, #3019 by @MAD-py\nAvoid __dict__ and __weakref__ attributes in AnyUrl and IP address fields, #2890 by @nuno-andre\nAdd ability to use Final in a field type annotation, #2766 by @uriyyo\nUpdate requirement to typing_extensions>=4.1.0 to guarantee dataclass_transform is available, #4424 by @commonism\nAdd Explosion and AWS to main sponsors, #4413 by @samuelcolvin\nUpdate documentation for copy_on_model_validation to reflect recent changes, #4369 by @samuelcolvin\nRuntime warning if __slots__ is passed to create_model, __slots__ is then ignored, #4432 by @samuelcolvin\nAdd type hints to BaseSettings.Config to avoid mypy errors, also correct mypy version compatibility notice in docs, #4450 by @samuelcolvin\n\nv1.10.0b1 (2022-08-24)\nPre-release, see the GitHub release for details.\nv1.10.0a2 (2022-08-24)\nPre-release, see the GitHub release for details.\nv1.10.0a1 (2022-08-22)\nPre-release, see the GitHub release for details.\nv1.9.2 (2022-08-11)\nRevert Breaking Change: v1.9.1 introduced a breaking change where model fields were\ndeep copied by default, this release reverts the default behaviour to match v1.9.0 and before,\nwhile also allow deep-copy behaviour via copy_on_model_validation = 'deep'. 
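As a rough illustration (class names invented, assuming pydantic 1.9.2+), the option is set on the Config of the model used as a field and controls how that instance is copied during validation of the parent:
from pydantic import BaseModel

class Child(BaseModel):
    name: str

    class Config:
        # 'shallow' (default), 'deep', or 'none' as of v1.9.2
        copy_on_model_validation = 'none'

class Parent(BaseModel):
    child: Child

child = Child(name='x')
parent = Parent(child=child)
assert parent.child is child  # 'none' keeps the original instance untouched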
See #4092 for more information.\n\nAllow for shallow copies of model fields, Config.copy_on_model_validation is now a str which must be\n'none', 'deep', or 'shallow' corresponding to not copying, deep copy & shallow copy; default 'shallow',\n#4093 by @timkpaine\n\nv1.9.1 (2022-05-19)\nThank you to pydantic's sponsors:\n@tiangolo, @stellargraph, @JonasKs, @grillazz, @Mazyod, @kevinalh, @chdsbd, @povilasb, @povilasb, @jina-ai,\n@mainframeindustries, @robusta-dev, @SendCloud, @rszamszur, @jodal, @hardbyte, @corleyma, @daddycocoaman,\n@Rehket, @jokull, @reillysiemens, @westonsteimel, @primer-io, @koxudaxi, @browniebroke, @stradivari96,\n@adriangb, @kamalgill, @jqueguiner, @dev-zero, @datarootsio, @RedCarpetUp\nfor their kind support.\n\nLimit the size of generics._generic_types_cache and generics._assigned_parameters\nto avoid unlimited increase in memory usage, #4083 by @samuelcolvin\nAdd Jupyverse and FPS as Jupyter projects using pydantic, #4082 by @davidbrochart\nSpeedup __isinstancecheck__ on pydantic models when the type is not a model, may also avoid memory \"leaks\", #4081 by @samuelcolvin\nFix in-place modification of FieldInfo that caused problems with PEP 593 type aliases, #4067 by @adriangb\nAdd support for autocomplete in VS Code via __dataclass_transform__ when using pydantic.dataclasses.dataclass, #4006 by @giuliano-oliveira\nRemove benchmarks from codebase and docs, #3973 by @samuelcolvin\nTyping checking with pyright in CI, improve docs on vscode/pylance/pyright, #3972 by @samuelcolvin\nFix nested Python dataclass schema regression, #3819 by @himbeles\nUpdate documentation about lazy evaluation of sources for Settings, #3806 by @garyd203\nPrevent subclasses of bytes being converted to bytes, #3706 by @samuelcolvin\nFixed \"error checking inheritance of\" when using PEP585 and PEP604 type hints, #3681 by @aleksul\nAllow self referencing ClassVars in models, #3679 by @samuelcolvin\nBreaking Change, see #4106: Fix issue with self-referencing dataclass, #3675 by @uriyyo\nInclude non-standard port numbers in rendered URLs, #3652 by @dolfinus\nConfig.copy_on_model_validation does a deep copy and not a shallow one, #3641 by @PrettyWood\nfix: clarify that discriminated unions do not support singletons, #3636 by @tommilligan\nAdd read_text(encoding='utf-8') for setup.py, #3625 by @hswong3i\nFix JSON Schema generation for Discriminated Unions within lists, #3608 by @samuelcolvin\n\nv1.9.0 (2021-12-31)\nThank you to pydantic's sponsors:\n@sthagen, @timdrijvers, @toinbis, @koxudaxi, @ginomempin, @primer-io, @and-semakin, @westonsteimel, @reillysiemens,\n@es3n1n, @jokull, @JonasKs, @Rehket, @corleyma, @daddycocoaman, @hardbyte, @datarootsio, @jodal, @aminalaee, @rafsaf,\n@jqueguiner, @chdsbd, @kevinalh, @Mazyod, @grillazz, @JonasKs, @simw, @leynier, @xfenix\nfor their kind support.\nHighlights\n\nadd Python 3.10 support, #2885 by @PrettyWood\nDiscriminated unions, #619 by @PrettyWood\nConfig.smart_union for better union logic, #2092 by @PrettyWood\nBinaries for Macos M1 CPUs, #3498 by @samuelcolvin\nComplex types can be set via nested environment variables, e.g. 
foo___bar, #3159 by @Air-Mark\nadd a dark mode to pydantic documentation, #2913 by @gbdlin\nAdd support for autocomplete in VS Code via __dataclass_transform__, #2721 by @tiangolo\nAdd \"exclude\" as a field parameter so that it can be configured using model config, #660 by @daviskirk\n\nv1.9.0 (2021-12-31) Changes\n\nApply update_forward_refs to Config.json_encodes prevent name clashes in types defined via strings, #3583 by @samuelcolvin\nExtend pydantic's mypy plugin to support mypy versions 0.910, 0.920, 0.921 & 0.930, #3573 & #3594 by @PrettyWood, @christianbundy, @samuelcolvin\n\nv1.9.0a2 (2021-12-24) Changes\n\nsupport generic models with discriminated union, #3551 by @PrettyWood\nkeep old behaviour of json() by default, #3542 by @PrettyWood\nRemoved typing-only __root__ attribute from BaseModel, #3540 by @layday\nBuild Python 3.10 wheels, #3539 by @mbachry\nFix display of extra fields with model __repr__, #3234 by @cocolman\nmodels copied via Config.copy_on_model_validation always have all fields, #3201 by @PrettyWood\nnested ORM from nested dictionaries, #3182 by @PrettyWood\nfix link to discriminated union section by @PrettyWood\n\nv1.9.0a1 (2021-12-18) Changes\n\nAdd support for Decimal-specific validation configurations in Field(), additionally to using condecimal(),\nto allow better support from editors and tooling, #3507 by @tiangolo\nAdd arm64 binaries suitable for MacOS with an M1 CPU to PyPI, #3498 by @samuelcolvin\nFix issue where None was considered invalid when using a Union type containing Any or object, #3444 by @tharradine\nWhen generating field schema, pass optional field argument (of type\npydantic.fields.ModelField) to __modify_schema__() if present, #3434 by @jasujm\nFix issue when pydantic fail to parse typing.ClassVar string type annotation, #3401 by @uriyyo\nMention Python >= 3.9.2 as an alternative to typing_extensions.TypedDict, #3374 by @BvB93\nChanged the validator method name in the Custom Errors example\nto more accurately describe what the validator is doing; changed from name_must_contain_space to  value_must_equal_bar, #3327 by @michaelrios28\nAdd AmqpDsn class, #3254 by @kludex\nAlways use Enum value as default in generated JSON schema, #3190 by @joaommartins\nAdd support for Mypy 0.920, #3175 by @christianbundy\nvalidate_arguments now supports extra customization (used to always be Extra.forbid), #3161 by @PrettyWood\nComplex types can be set by nested environment variables, #3159 by @Air-Mark\nFix mypy plugin to collect fields based on pydantic.utils.is_valid_field so that it ignores untyped private variables, #3146 by @hi-ogawa\nfix validate_arguments issue with Config.validate_all, #3135 by @PrettyWood\navoid dict coercion when using dict subclasses as field type, #3122 by @PrettyWood\nadd support for object type, #3062 by @PrettyWood\nUpdates pydantic dataclasses to keep _special properties on parent classes, #3043 by @zulrang\nAdd a TypedDict class for error objects, #3038 by @matthewhughes934\nFix support for using a subclass of an annotation as a default, #3018 by @JacobHayes\nmake create_model_from_typeddict mypy compliant, #3008 by @PrettyWood\nMake multiple inheritance work when using PrivateAttr, #2989 by @hmvp\nParse environment variables as JSON, if they have a Union type with a complex subfield, #2936 by @cbartz\nPrevent StrictStr permitting Enum values where the enum inherits from str, #2929 by @samuelcolvin\nMake SecretsSettingsSource parse values being assigned to fields of complex types when sourced from a secrets file,\njust as 
when sourced from environment variables, #2917 by @davidmreed\nadd a dark mode to pydantic documentation, #2913 by @gbdlin\nMake pydantic-mypy plugin compatible with pyproject.toml configuration, consistent with mypy changes.\nSee the doc for more information, #2908 by @jrwalk\nadd Python 3.10 support, #2885 by @PrettyWood\nCorrectly parse generic models with Json[T], #2860 by @geekingfrog\nUpdate contrib docs re: Python version to use for building docs, #2856 by @paxcodes\nClarify documentation about pydantic's support for custom validation and strict type checking,\ndespite pydantic being primarily a parsing library, #2855 by @paxcodes\nFix schema generation for Deque fields, #2810 by @sergejkozin\nfix an edge case when mixing constraints and Literal, #2794 by @PrettyWood\nFix postponed annotation resolution for NamedTuple and TypedDict when they're used directly as the type of fields\nwithin Pydantic models, #2760 by @jameysharp\nFix bug when mypy plugin fails on construct method call for BaseSettings derived classes, #2753 by @uriyyo\nAdd function overloading for a pydantic.create_model function, #2748 by @uriyyo\nFix mypy plugin issue with self field declaration, #2743 by @uriyyo\nThe colon at the end of the line \"The fields which were supplied when user was initialised:\" suggests that the code following it is related.\nChanged it to a period, #2733 by @krisaoe\nRenamed variable schema to schema_ to avoid shadowing of global variable name, #2724 by @shahriyarr\nAdd support for autocomplete in VS Code via __dataclass_transform__, #2721 by @tiangolo\nadd missing type annotations in BaseConfig and handle max_length = 0, #2719 by @PrettyWood\nChange orm_mode checking to allow recursive ORM mode parsing with dicts, #2718 by @nuno-andre\nAdd episode 313 of the Talk Python To Me podcast, where Michael Kennedy and Samuel Colvin discuss Pydantic, to the docs, #2712 by @RatulMaharaj\nfix JSON schema generation when a field is of type NamedTuple and has a default value, #2707 by @PrettyWood\nEnum fields now properly support extra kwargs in schema generation, #2697 by @sammchardy\nBreaking Change, see #3780: Make serialization of referenced pydantic models possible, #2650 by @PrettyWood\nAdd uniqueItems option to ConstrainedList, #2618 by @nuno-andre\nTry to evaluate forward refs automatically at model creation, #2588 by @uriyyo\nSwitch docs preview and coverage display to use smokeshow, #2580 by @samuelcolvin\nAdd __version__ attribute to pydantic module, #2572 by @paxcodes\nAdd postgresql+asyncpg, postgresql+pg8000, postgresql+psycopg2, postgresql+psycopg2cffi, postgresql+py-postgresql\nand postgresql+pygresql schemes for PostgresDsn, #2567 by @postgres-asyncpg\nEnable the Hypothesis plugin to generate a constrained decimal when the decimal_places argument is specified, #2524 by @cwe5590\nAllow collections.abc.Callable to be used as type in Python 3.9, #2519 by @daviskirk\nDocumentation update how to custom compile pydantic when using pip install, small change in setup.py\nto allow for custom CFLAGS when compiling, #2517 by @peterroelants\nremove side effect of default_factory to run it only once even if Config.validate_all is set, #2515 by @PrettyWood\nAdd lookahead to ip regexes for AnyUrl hosts. 
This allows urls with DNS labels\nlooking like IPs to validate as they are perfectly valid host names, #2512 by @sbv-csis\nSet minItems and maxItems in generated JSON schema for fixed-length tuples, #2497 by @PrettyWood\nAdd strict argument to conbytes, #2489 by @koxudaxi\nSupport user defined generic field types in generic models, #2465 by @daviskirk\nAdd an example and a short explanation of subclassing GetterDict to docs, #2463 by @nuno-andre\nadd KafkaDsn type, HttpUrl now has default port 80 for http and 443 for https, #2447 by @MihanixA\nAdd PastDate and FutureDate types, #2425 by @Kludex\nSupport generating schema for Generic fields with subtypes, #2375 by @maximberg\nfix(encoder): serialize NameEmail to str, #2341 by @alecgerona\nadd Config.smart_union to prevent coercion in Union if possible, see\nthe doc for more information, #2092 by @PrettyWood\nAdd ability to use typing.Counter as a model field type, #2060 by @uriyyo\nAdd parameterised subclasses to __bases__ when constructing new parameterised classes, so that A <: B => A[int] <: B[int], #2007 by @diabolo-dan\nCreate FileUrl type that allows URLs that conform to RFC 8089.\nAdd host_required parameter, which is True by default (AnyUrl and subclasses), False in RedisDsn, FileUrl, #1983 by @vgerak\nadd confrozenset(), analogous to conset() and conlist(), #1897 by @PrettyWood\nstop calling parent class root_validator if overridden, #1895 by @PrettyWood\nAdd repr (defaults to True) parameter to Field, to hide it from the default representation of the BaseModel, #1831 by @fnep\nAccept empty query/fragment URL parts, #1807 by @xavier\n\nv1.8.2 (2021-05-11)\n!!! warning\nA security vulnerability, level \"moderate\" is fixed in v1.8.2. Please upgrade ASAP.\nSee security advisory CVE-2021-29510\n\nSecurity fix: Fix date and datetime parsing so passing either 'infinity' or float('inf')\n(or their negative values) does not cause an infinite loop,\nsee security advisory CVE-2021-29510\nfix schema generation with Enum by generating a valid name, #2575 by @PrettyWood\nfix JSON schema generation with a Literal of an enum member, #2536 by @PrettyWood\nFix bug with configurations declarations that are passed as\nkeyword arguments during class creation, #2532 by @uriyyo\nAllow passing json_encoders in class kwargs, #2521 by @layday\nsupport arbitrary types with custom __eq__, #2483 by @PrettyWood\nsupport Annotated in validate_arguments and in generic models with Python 3.9, #2483 by @PrettyWood\n\nv1.8.1 (2021-03-03)\nBug fixes for regressions and new features from v1.8\n\nallow elements of Config.field to update elements of a Field, #2461 by @samuelcolvin\nfix validation with a BaseModel field and a custom root type, #2449 by @PrettyWood\nexpose Pattern encoder to fastapi, #2444 by @PrettyWood\nenable the Hypothesis plugin to generate a constrained float when the multiple_of argument is specified, #2442 by @tobi-lipede-oodle\nAvoid RecursionError when using some types like Enum or Literal with generic models, #2436 by @PrettyWood\ndo not overwrite declared __hash__ in subclasses of a model, #2422 by @PrettyWood\nfix mypy complaints on Path and UUID related custom types, #2418 by @PrettyWood\nSupport properly variable length tuples of compound types, #2416 by @PrettyWood\n\nv1.8 (2021-02-26)\nThank you to pydantic's sponsors:\n@jorgecarleitao, @BCarley, @chdsbd, @tiangolo, @matin, @linusg, @kevinalh, @koxudaxi, @timdrijvers, @mkeen, @meadsteve,\n@ginomempin, @primer-io, @and-semakin, @tomthorogood, @AjitZK, @westonsteimel, @Mazyod, 
@christippett, @CarlosDomingues,\n@Kludex, @r-m-n\nfor their kind support.\nHighlights\n\nHypothesis plugin for testing, #2097 by @Zac-HD\nsupport for NamedTuple and TypedDict, #2216 by @PrettyWood\nSupport Annotated hints on model fields, #2147 by @JacobHayes\nfrozen parameter on Config to allow models to be hashed, #1880 by @rhuille\n\nChanges\n\nBreaking Change, remove old deprecation aliases from v1, #2415 by @samuelcolvin:\n\nremove notes on migrating to v1 in docs\nremove Schema which was replaced by Field\nremove Config.case_insensitive which was replaced by Config.case_sensitive (default False)\nremove Config.allow_population_by_alias which was replaced by Config.allow_population_by_field_name\nremove model.fields which was replaced by model.__fields__\nremove model.to_string() which was replaced by str(model)\nremove model.__values__ which was replaced by model.__dict__\n\n\nBreaking Change: always validate only first sublevel items with each_item.\nThere were indeed some edge cases with some compound types where the validated items were the last sublevel ones, #1933 by @PrettyWood\nUpdate docs extensions to fix local syntax highlighting, #2400 by @daviskirk\nfix: allow utils.lenient_issubclass to handle typing.GenericAlias objects like list[str] in Python >= 3.9, #2399 by @daviskirk\nImprove field declaration for pydantic dataclass by allowing the usage of pydantic Field or 'metadata' kwarg of dataclasses.field, #2384 by @PrettyWood\nMaking typing-extensions a required dependency, #2368 by @samuelcolvin\nMake resolve_annotations more lenient, allowing for missing modules, #2363 by @samuelcolvin\nAllow configuring models through class kwargs, #2356 by @Bobronium\nPrevent Mapping subclasses from always being coerced to dict, #2325 by @ofek\nfix: allow None for type Optional[conset / conlist], #2320 by @PrettyWood\nSupport empty tuple type, #2318 by @PrettyWood\nfix: python_requires metadata to require >=3.6.1, #2306 by @hukkinj1\nProperly encode Decimal with, or without any decimal places, #2293 by @hultner\nfix: update __fields_set__ in BaseModel.copy(update=\u2026), #2290 by @PrettyWood\nfix: keep order of fields with BaseModel.construct(), #2281 by @PrettyWood\nSupport generating schema for Generic fields, #2262 by @maximberg\nFix validate_decorator so **kwargs doesn't exclude values when the keyword\nhas the same name as the *args or **kwargs names, #2251 by @cybojenix\nPrevent overriding positional arguments with keyword arguments in\nvalidate_arguments, as per behaviour with native functions, #2249 by @cybojenix\nadd documentation for con* type functions, #2242 by @tayoogunbiyi\nSupport custom root type (aka __root__) when using parse_obj() with nested models, #2238 by @PrettyWood\nSupport custom root type (aka __root__) with from_orm(), #2237 by @PrettyWood\nensure cythonized functions are left untouched when creating models, based on #1944 by @kollmats, #2228 by @samuelcolvin\nResolve forward refs for stdlib dataclasses converted into pydantic ones, #2220 by @PrettyWood\nAdd support for NamedTuple and TypedDict types.\nThose two types are now handled and validated when used inside BaseModel or pydantic dataclass.\nTwo utils are also added create_model_from_namedtuple and create_model_from_typeddict, #2216 by @PrettyWood\nDo not ignore annotated fields when type is Union[Type[...], ...], #2213 by @PrettyWood\nRaise a user-friendly TypeError when a root_validator does not return a dict (e.g. 
None), #2209 by @masalim2\nAdd a FrozenSet[str] type annotation to the allowed_schemes argument on the strict_url field type, #2198 by @Midnighter\nadd allow_mutation constraint to Field, #2195 by @sblack-usu\nAllow Field with a default_factory to be used as an argument to a function\ndecorated with validate_arguments, #2176 by @thomascobb\nAllow non-existent secrets directory by only issuing a warning, #2175 by @davidolrik\nfix URL regex to parse fragment without query string, #2168 by @andrewmwhite\nfix: ensure to always return one of the values in Literal field type, #2166 by @PrettyWood\nSupport typing.Annotated hints on model fields. A Field may now be set in the type hint with Annotated[..., Field(...); all other annotations are ignored but still visible with get_type_hints(..., include_extras=True), #2147 by @JacobHayes\nAdded StrictBytes type as well as strict=False option to ConstrainedBytes, #2136 by @rlizzo\nadded Config.anystr_lower and to_lower kwarg to constr and conbytes, #2134 by @tayoogunbiyi\nSupport plain typing.Tuple type, #2132 by @PrettyWood\nAdd a bound method validate to functions decorated with validate_arguments\nto validate parameters without actually calling the function, #2127 by @PrettyWood\nAdd the ability to customize settings sources (add / disable / change priority order), #2107 by @kozlek\nFix mypy complaints about most custom pydantic types, #2098 by @PrettyWood\nAdd a Hypothesis plugin for easier property-based testing with Pydantic's custom types - usage details here, #2097 by @Zac-HD\nadd validator for None, NoneType or Literal[None], #2095 by @PrettyWood\nHandle properly fields of type Callable with a default value, #2094 by @PrettyWood\nUpdated create_model return type annotation to return type which inherits from __base__ argument, #2071 by @uriyyo\nAdd merged json_encoders inheritance, #2064 by @art049\nallow overwriting ClassVars in sub-models without having to re-annotate them, #2061 by @layday\nadd default encoder for Pattern type, #2045 by @PrettyWood\nAdd NonNegativeInt, NonPositiveInt, NonNegativeFloat, NonPositiveFloat, #1975 by @mdavis-xyz\nUse % for percentage in string format of colors, #1960 by @EdwardBetts\nFixed issue causing KeyError to be raised when building schema from multiple BaseModel with the same names declared in separate classes, #1912 by @JSextonn\nAdd rediss (Redis over SSL) protocol to RedisDsn\nAllow URLs without user part (e.g., rediss://:pass@localhost), #1911 by @TrDex\nAdd a new frozen boolean parameter to Config (default: False).\nSetting frozen=True does everything that allow_mutation=False does, and also generates a __hash__() method for the model. This makes instances of the model potentially hashable if all the attributes are hashable, #1880 by @rhuille\nfix schema generation with multiple Enums having the same name, #1857 by @PrettyWood\nAdded support for 13/19 digits VISA credit cards in PaymentCardNumber type, #1416 by @AlexanderSov\nfix: prevent RecursionError while using recursive GenericModels, #1370 by @xppt\nuse enum for typing.Literal in JSON schema, #1350 by @PrettyWood\nFix: some recursive models did not require update_forward_refs and silently behaved incorrectly, #1201 by @PrettyWood\nFix bug where generic models with fields where the typevar is nested in another type a: List[T] are considered to be concrete. This allows these models to be subclassed and composed as expected, #947 by @daviskirk\nAdd Config.copy_on_model_validation flag. 
When set to False, pydantic will keep models used as fields\nuntouched on validation instead of reconstructing (copying) them, #265 by @PrettyWood\n\nv1.7.4 (2021-05-11)\n\nSecurity fix: Fix date and datetime parsing so passing either 'infinity' or float('inf')\n(or their negative values) does not cause an infinite loop,\nSee security advisory CVE-2021-29510\n\nv1.7.3 (2020-11-30)\nThank you to pydantic's sponsors:\n@timdrijvers, @BCarley, @chdsbd, @tiangolo, @matin, @linusg, @kevinalh, @jorgecarleitao, @koxudaxi, @primer-api,\n@mkeen, @meadsteve for their kind support.\n\nfix: set right default value for required (optional) fields, #2142 by @PrettyWood\nfix: support underscore_attrs_are_private with generic models, #2138 by @PrettyWood\nfix: update all modified field values in root_validator when validate_assignment is on, #2116 by @PrettyWood\nAllow pickling of pydantic.dataclasses.dataclass dynamically created from a built-in dataclasses.dataclass, #2111 by @aimestereo\nFix a regression where Enum fields would not propagate keyword arguments to the schema, #2109 by @bm424\nIgnore __doc__ as private attribute when Config.underscore_attrs_are_private is set, #2090 by @PrettyWood\n\nv1.7.2 (2020-11-01)\n\nfix slow GenericModel concrete model creation, allow GenericModel concrete name reusing in module, #2078 by @Bobronium\nkeep the order of the fields when validate_assignment is set, #2073 by @PrettyWood\nforward all the params of the stdlib dataclass when converted into pydantic dataclass, #2065 by @PrettyWood\n\nv1.7.1 (2020-10-28)\nThank you to pydantic's sponsors:\n@timdrijvers, @BCarley, @chdsbd, @tiangolo, @matin, @linusg, @kevinalh, @jorgecarleitao, @koxudaxi, @primer-api, @mkeen\nfor their kind support.\n\nfix annotation of validate_arguments when passing configuration as argument, #2055 by @layday\nFix mypy assignment error when using PrivateAttr, #2048 by @aphedges\nfix underscore_attrs_are_private causing TypeError when overriding __init__, #2047 by @samuelcolvin\nFixed regression introduced in v1.7 involving exception handling in field validators when validate_assignment=True, #2044 by @johnsabath\nfix: pydantic dataclass can inherit from stdlib dataclass\nand Config.arbitrary_types_allowed is supported, #2042 by @PrettyWood\n\nv1.7 (2020-10-26)\nThank you to pydantic's sponsors:\n@timdrijvers, @BCarley, @chdsbd, @tiangolo, @matin, @linusg, @kevinalh, @jorgecarleitao, @koxudaxi, @primer-api\nfor their kind support.\nHighlights\n\nPython 3.9 support, thanks @PrettyWood\nPrivate model attributes, thanks @Bobronium\n\"secrets files\" support in BaseSettings, thanks @mdgilene\nconvert stdlib dataclasses to pydantic dataclasses and use stdlib dataclasses in models, thanks @PrettyWood\n\nChanges\n\nBreaking Change: remove __field_defaults__, add default_factory support with BaseModel.construct.\nUse .get_default() method on fields in __fields__ attribute instead, #1732 by @PrettyWood\nRearrange CI to run linting as a separate job, split install recipes for different tasks, #2020 by @samuelcolvin\nAllows subclasses of generic models to make some, or all, of the superclass's type parameters concrete, while\nalso defining new type parameters in the subclass, #2005 by @choogeboom\nCall validator with the correct values parameter type in BaseModel.__setattr__,\nwhen validate_assignment = True in model config, #1999 by @me-ransh\nForce fields.Undefined to be a singleton object, fixing inherited generic model schemas, #1981 by @daviskirk\nInclude tests in source distributions, #1976 by 
@sbraz\nAdd ability to use min_length/max_length constraints with secret types, #1974 by @uriyyo\nAlso check root_validators when validate_assignment is on, #1971 by @PrettyWood\nFix const validators not running when custom validators are present, #1957 by @hmvp\nadd deque to field types, #1935 by @wozniakty\nadd basic support for Python 3.9, #1832 by @PrettyWood\nFix typo in the anchor of exporting_models.md#modelcopy and incorrect description, #1821 by @KimMachineGun\nAdded ability for BaseSettings to read \"secret files\", #1820 by @mdgilene\nadd parse_raw_as utility function, #1812 by @PrettyWood\nSupport home directory relative paths for dotenv files (e.g. ~/.env), #1803 by @PrettyWood\nClarify documentation for parse_file to show that the argument\nshould be a file path not a file-like object, #1794 by @mdavis-xyz\nFix false positive from mypy plugin when a class nested within a BaseModel is named Model, #1770 by @selimb\nadd basic support of Pattern type in schema generation, #1767 by @PrettyWood\nSupport custom title, description and default in schema of enums, #1748 by @PrettyWood\nProperly represent Literal Enums when use_enum_values is True, #1747 by @noelevans\nAllows timezone information to be added to strings to be formatted as time objects. Permitted formats are Z for UTC\nor an offset for absolute positive or negative time shifts. Or the timezone data can be omitted, #1744 by @noelevans\nAdd stub __init__ with Python 3.6 signature for ForwardRef, #1738 by @sirtelemak\nFix behaviour with forward refs and optional fields in nested models, #1736 by @PrettyWood\nadd Enum and IntEnum as valid types for fields, #1735 by @PrettyWood\nChange default value of __module__ argument of create_model from None to 'pydantic.main'.\nSet reference of created concrete model to it's module to allow pickling (not applied to models created in\nfunctions), #1686 by @Bobronium\nAdd private attributes support, #1679 by @Bobronium\nadd config to @validate_arguments, #1663 by @samuelcolvin\nAllow descendant Settings models to override env variable names for the fields defined in parent Settings models with\nenv in their Config. Previously only env_prefix configuration option was applicable, #1561 by @ojomio\nSupport ref_template when creating schema $refs, #1479 by @kilo59\nAdd a __call__ stub to PyObject so that mypy will know that it is callable, #1352 by @brianmaissy\npydantic.dataclasses.dataclass decorator now supports built-in dataclasses.dataclass.\nIt is hence possible to convert an existing dataclass easily to add Pydantic validation.\nMoreover nested dataclasses are also supported, #744 by @PrettyWood\n\nv1.6.2 (2021-05-11)\n\nSecurity fix: Fix date and datetime parsing so passing either 'infinity' or float('inf')\n(or their negative values) does not cause an infinite loop,\nSee security advisory CVE-2021-29510\n\nv1.6.1 (2020-07-15)\n\nfix validation and parsing of nested models with default_factory, #1710 by @PrettyWood\n\nv1.6 (2020-07-11)\nThank you to pydantic's sponsors: @matin, @tiangolo, @chdsbd, @jorgecarleitao, and 1 anonymous sponsor for their kind support.\n\nModify validators for conlist and conset to not have always=True, #1682 by @samuelcolvin\nadd port check to AnyUrl (can't exceed 65536) ports are 16 insigned bits: 0 <= port <= 2**16-1 src: rfc793 header format, #1654 by @flapili\nDocument default regex anchoring semantics, #1648 by @yurikhan\nUse chain.from_iterable in class_validators.py. 
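A toy comparison of the two itertools patterns (illustrative only, not pydantic's actual code):
from itertools import chain

groups = ([1, 2], [3, 4], [5])
# chain(*groups) must unpack the outer iterable into arguments up front,
# while chain.from_iterable(groups) walks it lazily, one sub-iterable at a time.
assert list(chain(*groups)) == list(chain.from_iterable(groups)) == [1, 2, 3, 4, 5]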
This is a faster and more idiomatic way of using itertools.chain.\nInstead of computing all the items in the iterable and storing them in memory, they are computed one-by-one and never\nstored as a huge list. This can save on both runtime and memory space, #1642 by @cool-RR\nAdd conset(), analogous to conlist(), #1623 by @patrickkwang\nmake Pydantic errors (un)pickable, #1616 by @PrettyWood\nAllow custom encoding for dotenv files, #1615 by @PrettyWood\nEnsure SchemaExtraCallable is always defined to get type hints on BaseConfig, #1614 by @PrettyWood\nUpdate datetime parser to support negative timestamps, #1600 by @mlbiche\nUpdate mypy, remove AnyType alias for Type[Any], #1598 by @samuelcolvin\nAdjust handling of root validators so that errors are aggregated from all failing root validators, instead of reporting on only the first root validator to fail, #1586 by @beezee\nMake __modify_schema__ on Enums apply to the enum schema rather than fields that use the enum, #1581 by @therefromhere\nFix behavior of __all__ key when used in conjunction with index keys in advanced include/exclude of fields that are sequences, #1579 by @xspirus\nSubclass validators do not run when referencing a List field defined in a parent class when each_item=True. Added an example to the docs illustrating this, #1566 by @samueldeklund\nchange schema.field_class_to_schema to support frozenset in schema, #1557 by @wangpeibao\nCall __modify_schema__ only for the field schema, #1552 by @PrettyWood\nMove the assignment of field.validate_always in fields.py so the always parameter of validators work on inheritance, #1545 by @dcHHH\nAdded support for UUID instantiation through 16 byte strings such as b'\\x12\\x34\\x56\\x78' * 4. This was done to support BINARY(16) columns in sqlalchemy, #1541 by @shawnwall\nAdd a test assertion that default_factory can return a singleton, #1523 by @therefromhere\nAdd NameEmail.__eq__ so duplicate NameEmail instances are evaluated as equal, #1514 by @stephen-bunn\nAdd datamodel-code-generator link in pydantic document site, #1500 by @koxudaxi\nAdded a \"Discussion of Pydantic\" section to the documentation, with a link to \"Pydantic Introduction\" video by Alexander Hultn\u00e9r, #1499 by @hultner\nAvoid some side effects of default_factory by calling it only once\nif possible and by not setting a default value in the schema, #1491 by @PrettyWood\nAdded docs about dumping dataclasses to JSON, #1487 by @mikegrima\nMake BaseModel.__signature__ class-only, so getting __signature__ from model instance will raise AttributeError, #1466 by @Bobronium\ninclude 'format': 'password' in the schema for secret types, #1424 by @atheuz\nModify schema constraints on ConstrainedFloat so that exclusiveMinimum and\nminimum are not included in the schema if they are equal to -math.inf and\nexclusiveMaximum and maximum are not included if they are equal to math.inf, #1417 by @vdwees\nSquash internal __root__ dicts in .dict() (and, by extension, in .json()), #1414 by @patrickkwang\nMove const validator to post-validators so it validates the parsed value, #1410 by @selimb\nFix model validation to handle nested literals, e.g. 
Literal['foo', Literal['bar']], #1364 by @DBCerigo\nRemove user_required = True from RedisDsn, neither user nor password are required, #1275 by @samuelcolvin\nRemove extra allOf from schema for fields with Union and custom Field, #1209 by @mostaphaRoudsari\nUpdates OpenAPI schema generation to output all enums as separate models.\nInstead of inlining the enum values in the model schema, models now use a $ref\nproperty to point to the enum definition, #1173 by @calvinwyoung\n\nv1.5.1 (2020-04-23)\n\nSignature generation with extra: allow never uses a field name, #1418 by @prettywood\nAvoid mutating Field default value, #1412 by @prettywood\n\nv1.5 (2020-04-18)\n\nMake includes/excludes arguments for .dict(), ._iter(), ..., immutable, #1404 by @AlexECX\nAlways use a field's real name with includes/excludes in model._iter(), regardless of by_alias, #1397 by @AlexECX\nUpdate constr regex example to include start and end lines, #1396 by @lmcnearney\nConfirm that shallow model.copy() does make a shallow copy of attributes, #1383 by @samuelcolvin\nRenaming model_name argument of main.create_model() to __model_name to allow using model_name as a field name, #1367 by @kittipatv\nReplace raising of exception to silent passing  for non-Var attributes in mypy plugin, #1345 by @b0g3r\nRemove typing_extensions dependency for Python 3.8, #1342 by @prettywood\nMake SecretStr and SecretBytes initialization idempotent, #1330 by @atheuz\ndocument making secret types dumpable using the json method, #1328 by @atheuz\nMove all testing and build to github actions, add windows and macos binaries,\nthank you @StephenBrown2 for much help, #1326 by @samuelcolvin\nfix card number length check in PaymentCardNumber, PaymentCardBrand now inherits from str, #1317 by @samuelcolvin\nHave BaseModel inherit from Representation to make mypy happy when overriding __str__, #1310 by @FuegoFro\nAllow None as input to all optional list fields, #1307 by @prettywood\nAdd datetime field to default_factory example, #1301 by @StephenBrown2\nAllow subclasses of known types to be encoded with superclass encoder, #1291 by @StephenBrown2\nExclude exported fields from all elements of a list/tuple of submodels/dicts with '__all__', #1286 by @masalim2\nAdd pydantic.color.Color objects as available input for Color fields, #1258 by @leosussan\nIn examples, type nullable fields as Optional, so that these are valid mypy annotations, #1248 by @kokes\nMake pattern_validator() accept pre-compiled Pattern objects. 
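A small sketch of what this enables for Pattern-typed fields (the Route model is illustrative, assuming pydantic 1.5+):
import re
from typing import Pattern
from pydantic import BaseModel

class Route(BaseModel):
    matcher: Pattern

# a pre-compiled pattern is accepted directly, not just a pattern string
route = Route(matcher=re.compile(r'^/api/v\d+/'))
assert route.matcher.match('/api/v1/users')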
Fix str_validator() return type to str, #1237 by @adamgreg\nDocument how to manage Generics and inheritance, #1229 by @esadruhn\nupdate_forward_refs() method of BaseModel now copies __dict__ of class module instead of modyfying it, #1228 by @paul-ilyin\nSupport instance methods and class methods with @validate_arguments, #1222 by @samuelcolvin\nAdd default_factory argument to Field to create a dynamic default value by passing a zero-argument callable, #1210 by @prettywood\nadd support for NewType of List, Optional, etc, #1207 by @Kazy\nfix mypy signature for root_validator, #1192 by @samuelcolvin\nFixed parsing of nested 'custom root type' models, #1190 by @Shados\nAdd validate_arguments function decorator which checks the arguments to a function matches type annotations, #1179 by @samuelcolvin\nAdd __signature__ to models, #1034 by @Bobronium\nRefactor ._iter() method, 10x speed boost for dict(model), #1017 by @Bobronium\n\nv1.4 (2020-01-24)\n\nBreaking Change: alias precedence logic changed so aliases on a field always take priority over\nan alias from alias_generator to avoid buggy/unexpected behaviour,\nsee here for details, #1178 by @samuelcolvin\nAdd support for unicode and punycode in TLDs, #1182 by @jamescurtin\nFix cls argument in validators during assignment, #1172 by @samuelcolvin\ncompleting Luhn algorithm for PaymentCardNumber, #1166 by @cuencandres\nadd support for generics that implement __get_validators__ like a custom data type, #1159 by @tiangolo\nadd support for infinite generators with Iterable, #1152 by @tiangolo\nfix url_regex to accept schemas with +, - and . after the first character, #1142 by @samuelcolvin\nmove version_info() to version.py, suggest its use in issues, #1138 by @samuelcolvin\nImprove pydantic import time by roughly 50% by deferring some module loading and regex compilation, #1127 by @samuelcolvin\nFix EmailStr and NameEmail to accept instances of themselves in cython, #1126 by @koxudaxi\nPass model class to the Config.schema_extra callable, #1125 by @therefromhere\nFix regex for username and password in URLs, #1115 by @samuelcolvin\nAdd support for nested generic models, #1104 by @dmontagu\nadd __all__ to __init__.py to prevent \"implicit reexport\" errors from mypy, #1072 by @samuelcolvin\nAdd support for using \"dotenv\" files with BaseSettings, #1011 by @acnebs\n\nv1.3 (2019-12-21)\n\nChange schema and schema_model to handle dataclasses by using their __pydantic_model__ feature, #792 by @aviramha\nAdded option for root_validator to be skipped if values validation fails using keyword skip_on_failure=True, #1049 by @aviramha\nAllow Config.schema_extra to be a callable so that the generated schema can be post-processed, #1054 by @selimb\nUpdate mypy to version 0.750, #1057 by @dmontagu\nTrick Cython into allowing str subclassing, #1061 by @skewty\nPrevent type attributes being added to schema unless the attribute __schema_attributes__ is True, #1064 by @samuelcolvin\nChange BaseModel.parse_file to use Config.json_loads, #1067 by @kierandarcy\nFix for optional Json fields, #1073 by @volker48\nChange the default number of threads used when compiling with cython to one,\nallow override via the CYTHON_NTHREADS environment variable, #1074 by @samuelcolvin\nRun FastAPI tests during Pydantic's CI tests, #1075 by @tiangolo\nMy mypy strictness constraints, and associated tweaks to type annotations, #1077 by @samuelcolvin\nAdd __eq__ to SecretStr and SecretBytes to allow \"value equals\", #1079 by @sbv-trueenergy\nFix schema generation for nested None case, 
#1088 by @lutostag\nConsistent checks for sequence like objects, #1090 by @samuelcolvin\nFix Config inheritance on BaseSettings when used with env_prefix, #1091 by @samuelcolvin\nFix for __modify_schema__ when it conflicted with field_class_to_schema*, #1102 by @samuelcolvin\ndocs: Fix explanation of case sensitive environment variable names when populating BaseSettings subclass attributes, #1105 by @tribals\nRename django-rest-framework benchmark in documentation, #1119 by @frankie567\n\nv1.2 (2019-11-28)\n\nPossible Breaking Change: Add support for required Optional with name: Optional[AnyType] = Field(...)\nand refactor ModelField creation to preserve required parameter value, #1031 by @tiangolo;\nsee here for details\nAdd benchmarks for cattrs, #513 by @sebastianmika\nAdd exclude_none option to dict() and friends, #587 by @niknetniko\nAdd benchmarks for valideer, #670 by @gsakkis\nAdd parse_obj_as and parse_file_as functions for ad-hoc parsing of data into arbitrary pydantic-compatible types, #934 by @dmontagu\nAdd allow_reuse argument to validators, thus allowing validator reuse, #940 by @dmontagu\nAdd support for mapping types for custom root models, #958 by @dmontagu\nMypy plugin support for dataclasses, #966 by @koxudaxi\nAdd support for dataclasses default factory, #968 by @ahirner\nAdd a ByteSize type for converting byte string (1GB) to plain bytes, #977 by @dgasmith\nFix mypy complaint about @root_validator(pre=True), #984 by @samuelcolvin\nAdd manylinux binaries for Python 3.8 to pypi, also support manylinux2010, #994 by @samuelcolvin\nAdds ByteSize conversion to another unit, #995 by @dgasmith\nFix __str__ and __repr__ inheritance for models, #1022 by @samuelcolvin\nadd testimonials section to docs, #1025 by @sullivancolin\nAdd support for typing.Literal for Python 3.8, #1026 by @dmontagu\n\nv1.1.1 (2019-11-20)\n\nFix bug where use of complex fields on sub-models could cause fields to be incorrectly configured, #1015 by @samuelcolvin\n\nv1.1 (2019-11-07)\n\nAdd a mypy plugin for type checking BaseModel.__init__ and more, #722 by @dmontagu\nChange return type typehint for GenericModel.__class_getitem__ to prevent PyCharm warnings, #936 by @dmontagu\nFix usage of Any to allow None, also support TypeVar thus allowing use of un-parameterised collection types\ne.g. 
Dict and List, #962 by @samuelcolvin\nSet FieldInfo on subfields to fix schema generation for complex nested types, #965 by @samuelcolvin\n\nv1.0 (2019-10-23)\n\nBreaking Change: deprecate the Model.fields property, use Model.__fields__ instead, #883 by @samuelcolvin\nBreaking Change: Change the precedence of aliases so child model aliases override parent aliases,\nincluding using alias_generator, #904 by @samuelcolvin\nBreaking change: Rename skip_defaults to exclude_unset, and add ability to exclude actual defaults, #915 by @dmontagu\nAdd **kwargs to pydantic.main.ModelMetaclass.__new__ so __init_subclass__ can take custom parameters on extended\nBaseModel classes, #867 by @retnikt\nFix field of a type that has a default value, #880 by @koxudaxi\nUse FutureWarning instead of DeprecationWarning when alias instead of env is used for settings models, #881 by @samuelcolvin\nFix issue with BaseSettings inheritance and alias getting set to None, #882 by @samuelcolvin\nModify __repr__ and __str__ methods to be consistent across all public classes, add __pretty__ to support\npython-devtools, #884 by @samuelcolvin\ndeprecation warning for case_insensitive on BaseSettings config, #885 by @samuelcolvin\nFor BaseSettings merge environment variables and in-code values recursively, as long as they create a valid object\nwhen merged together, to allow splitting init arguments, #888 by @idmitrievsky\nchange secret types example, #890 by @ashears\nChange the signature of Model.construct() to be more user-friendly, document construct() usage, #898 by @samuelcolvin\nAdd example for the construct() method, #907 by @ashears\nImprove use of Field constraints on complex types, raise an error if constraints are not enforceable,\nalso support tuples with an ellipsis Tuple[X, ...], Sequence and FrozenSet in schema, #909 by @samuelcolvin\nupdate docs for bool missing valid value, #911 by @trim21\nBetter str/repr logic for ModelField, #912 by @samuelcolvin\nFix ConstrainedList, update schema generation to reflect min_items and max_items Field() arguments, #917 by @samuelcolvin\nAllow abstracts sets (eg. dict keys) in the include and exclude arguments of dict(), #921 by @samuelcolvin\nFix JSON serialization errors on ValidationError.json() by using pydantic_encoder, #922 by @samuelcolvin\nClarify usage of remove_untouched, improve error message for types with no validators, #926 by @retnikt\n\nv1.0b2 (2019-10-07)\n\nMark StrictBool typecheck as bool to allow for default values without mypy errors, #690 by @dmontagu\nTransfer the documentation build from sphinx to mkdocs, re-write much of the documentation, #856 by @samuelcolvin\nAdd support for custom naming schemes for GenericModel subclasses, #859 by @dmontagu\nAdd if TYPE_CHECKING: to the excluded lines for test coverage, #874 by @dmontagu\nRename allow_population_by_alias to allow_population_by_field_name, remove unnecessary warning about it, #875 by @samuelcolvin\n\nv1.0b1 (2019-10-01)\n\nBreaking Change: rename Schema to Field, make it a function to placate mypy, #577 by @samuelcolvin\nBreaking Change: modify parsing behavior for bool, #617 by @dmontagu\nBreaking Change: get_validators is no longer recognised, use __get_validators__.\nConfig.ignore_extra and Config.allow_extra are no longer recognised, use Config.extra, #720 by @samuelcolvin\nBreaking Change: modify default config settings for BaseSettings; case_insensitive renamed to case_sensitive,\ndefault changed to case_sensitive = False, env_prefix default changed to '' - e.g. 
no prefix, #721 by @dmontagu\nBreaking change: Implement root_validator and rename root errors from __obj__ to __root__, #729 by @samuelcolvin\nBreaking Change: alter the behaviour of dict(model) so that sub-models are nolonger\nconverted to dictionaries, #733 by @samuelcolvin\nBreaking change: Added initvars support to post_init_post_parse, #748 by @Raphael-C-Almeida\nBreaking Change: Make BaseModel.json() only serialize the __root__ key for models with custom root, #752 by @dmontagu\nBreaking Change: complete rewrite of URL parsing logic, #755 by @samuelcolvin\nBreaking Change: preserve superclass annotations for field-determination when not provided in subclass, #757 by @dmontagu\nBreaking Change: BaseSettings now uses the special env settings to define which environment variables to\nread, not aliases, #847 by @samuelcolvin\nadd support for assert statements inside validators, #653 by @abdusco\nUpdate documentation to specify the use of pydantic.dataclasses.dataclass and subclassing pydantic.BaseModel, #710 by @maddosaurus\nAllow custom JSON decoding and encoding via json_loads and json_dumps Config properties, #714 by @samuelcolvin\nmake all annotated fields occur in the order declared, #715 by @dmontagu\nuse pytest to test mypy integration, #735 by @dmontagu\nadd __repr__ method to ErrorWrapper, #738 by @samuelcolvin\nAdded support for FrozenSet members in dataclasses, and a better error when attempting to use types from the typing module that are not supported by Pydantic, #745 by @djpetti\nadd documentation for Pycharm Plugin, #750 by @koxudaxi\nfix broken examples in the docs, #753 by @dmontagu\nmoving typing related objects into pydantic.typing, #761 by @samuelcolvin\nMinor performance improvements to ErrorWrapper, ValidationError and datetime parsing, #763 by @samuelcolvin\nImprovements to datetime/date/time/timedelta types: more descriptive errors,\nchange errors to value_error not type_error, support bytes, #766 by @samuelcolvin\nfix error messages for Literal types with multiple allowed values, #770 by @dmontagu\nImproved auto-generated title field in JSON schema by converting underscore to space, #772 by @skewty\nsupport mypy --no-implicit-reexport for dataclasses, also respect --no-implicit-reexport in pydantic itself, #783 by @samuelcolvin\nadd the PaymentCardNumber type, #790 by @matin\nFix const validations for lists, #794 by @hmvp\nSet additionalProperties to false in schema for models with extra fields disallowed, #796 by @Code0x58\nEmailStr validation method now returns local part case-sensitive per RFC 5321, #798 by @henriklindgren\nAdded ability to validate strictness to ConstrainedFloat, ConstrainedInt and ConstrainedStr and added\nStrictFloat and StrictInt classes, #799 by @DerRidda\nImprove handling of None and Optional, replace whole with each_item (inverse meaning, default False)\non validators, #803 by @samuelcolvin\nadd support for Type[T] type hints, #807 by @timonbimon\nPerformance improvements from removing change_exceptions, change how pydantic error are constructed, #819 by @samuelcolvin\nFix the error message arising when a BaseModel-type model field causes a ValidationError during parsing, #820 by @dmontagu\nallow getter_dict on Config, modify GetterDict to be more like a Mapping object and thus easier to work with, #821 by @samuelcolvin\nOnly check TypeVar param on base GenericModel class, #842 by @zpencerq\nrename Model._schema_cache -> Model.__schema_cache__, Model._json_encoder -> Model.__json_encoder__,\nModel._custom_root_type -> 
Model.__custom_root_type__, #851 by @samuelcolvin\n\n... see here for earlier changes.\n", "description": "Data validation and settings management using Python type hints."}, {"name": "pycryptodomex", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\nPyCryptodome\nPyCryptodome is a self-contained Python package of low-level\ncryptographic primitives.\nIt supports Python 2.7, Python 3.5 and newer, and PyPy.\nThe installation procedure depends on the package you want the library to be in.\nPyCryptodome can be used as:\n\nan almost drop-in replacement for the old PyCrypto library.\nYou install it with:\npip install pycryptodome\n\nIn this case, all modules are installed under the Crypto package.\nOne must avoid having both PyCrypto and PyCryptodome installed\nat the same time, as they will interfere with each other.\nThis option is therefore recommended only when you are sure that\nthe whole application is deployed in a virtualenv.\n\na library independent of the old PyCrypto.\nYou install it with:\npip install pycryptodomex\n\nIn this case, all modules are installed under the Cryptodome package.\nPyCrypto and PyCryptodome can coexist.\n\n\nFor faster public key operations in Unix, you should install GMP in your system.\nPyCryptodome is a fork of PyCrypto. It brings the following enhancements\nwith respect to the last official version of PyCrypto (2.6.1):\n\nAuthenticated encryption modes (GCM, CCM, EAX, SIV, OCB)\nAccelerated AES on Intel platforms via AES-NI\nFirst class support for PyPy\nElliptic curves cryptography (NIST P-curves; Ed25519, Ed448)\nBetter and more compact API (nonce and iv attributes for ciphers,\nautomatic generation of random nonces and IVs, simplified CTR cipher mode,\nand more)\nSHA-3 hash algorithms (FIPS 202) and derived functions (NIST SP-800 185):\nSHAKE128 and SHA256 XOFs\ncSHAKE128 and cSHAKE256 XOFs\nKMAC128 and KMAC256\nTupleHash128 and TupleHash256\n\n\nKangarooTwelve XOF (derived from Keccak)\nTruncated hash algorithms SHA-512/224 and SHA-512/256 (FIPS 180-4)\nBLAKE2b and BLAKE2s hash algorithms\nSalsa20 and ChaCha20/XChaCha20 stream ciphers\nPoly1305 MAC\nChaCha20-Poly1305 and XChaCha20-Poly1305 authenticated ciphers\nscrypt, bcrypt, HKDF, and NIST SP 800 108r1 Counter Mode key derivation functions\nDeterministic (EC)DSA and EdDSA\nPassword-protected PKCS#8 key containers\nShamir's Secret Sharing scheme\nRandom numbers get sourced directly from the OS (and not from a CSPRNG in userspace)\nSimplified install process, including better support for Windows\nCleaner RSA and DSA key generation (largely based on FIPS 186-4)\nMajor clean ups and simplification of the code base\n\nPyCryptodome is not a wrapper to a separate C library like OpenSSL.\nTo the largest possible extent, algorithms are implemented in pure Python.\nOnly the pieces that are extremely critical to performance (e.g. 
block ciphers)\nare implemented as C extensions.\nFor more information, see the homepage.\nFor security issues, please send an email to security@pycryptodome.org.\nAll the code can be downloaded from GitHub.\n\n\n", "description": "Cryptographic library implementing ciphers, hashes and public key algorithms."}, {"name": "pycryptodome", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\nPyCryptodome\nPyCryptodome is a self-contained Python package of low-level\ncryptographic primitives.\nIt supports Python 2.7, Python 3.5 and newer, and PyPy.\nThe installation procedure depends on the package you want the library to be in.\nPyCryptodome can be used as:\n\nan almost drop-in replacement for the old PyCrypto library.\nYou install it with:\npip install pycryptodome\n\nIn this case, all modules are installed under the Crypto package.\nOne must avoid having both PyCrypto and PyCryptodome installed\nat the same time, as they will interfere with each other.\nThis option is therefore recommended only when you are sure that\nthe whole application is deployed in a virtualenv.\n\na library independent of the old PyCrypto.\nYou install it with:\npip install pycryptodomex\n\nIn this case, all modules are installed under the Cryptodome package.\nPyCrypto and PyCryptodome can coexist.\n\n\nFor faster public key operations in Unix, you should install GMP in your system.\nPyCryptodome is a fork of PyCrypto. It brings the following enhancements\nwith respect to the last official version of PyCrypto (2.6.1):\n\nAuthenticated encryption modes (GCM, CCM, EAX, SIV, OCB)\nAccelerated AES on Intel platforms via AES-NI\nFirst class support for PyPy\nElliptic curves cryptography (NIST P-curves; Ed25519, Ed448)\nBetter and more compact API (nonce and iv attributes for ciphers,\nautomatic generation of random nonces and IVs, simplified CTR cipher mode,\nand more)\nSHA-3 hash algorithms (FIPS 202) and derived functions (NIST SP-800 185):\nSHAKE128 and SHA256 XOFs\ncSHAKE128 and cSHAKE256 XOFs\nKMAC128 and KMAC256\nTupleHash128 and TupleHash256\n\n\nKangarooTwelve XOF (derived from Keccak)\nTruncated hash algorithms SHA-512/224 and SHA-512/256 (FIPS 180-4)\nBLAKE2b and BLAKE2s hash algorithms\nSalsa20 and ChaCha20/XChaCha20 stream ciphers\nPoly1305 MAC\nChaCha20-Poly1305 and XChaCha20-Poly1305 authenticated ciphers\nscrypt, bcrypt, HKDF, and NIST SP 800 108r1 Counter Mode key derivation functions\nDeterministic (EC)DSA and EdDSA\nPassword-protected PKCS#8 key containers\nShamir's Secret Sharing scheme\nRandom numbers get sourced directly from the OS (and not from a CSPRNG in userspace)\nSimplified install process, including better support for Windows\nCleaner RSA and DSA key generation (largely based on FIPS 186-4)\nMajor clean ups and simplification of the code base\n\nPyCryptodome is not a wrapper to a separate C library like OpenSSL.\nTo the largest possible extent, algorithms are implemented in pure Python.\nOnly the pieces that are extremely critical to performance (e.g. 
block ciphers)\nare implemented as C extensions.\nFor more information, see the homepage.\nFor security issues, please send an email to security@pycryptodome.org.\nAll the code can be downloaded from GitHub.\n\n\n", "description": "Cryptographic library for Python", "category": "Cryptography"}, {"name": "pycparser", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npycparser v2.21\n1\u00a0\u00a0\u00a0Introduction\n1.1\u00a0\u00a0\u00a0What is pycparser?\n1.2\u00a0\u00a0\u00a0What is it good for?\n1.3\u00a0\u00a0\u00a0Which version of C does pycparser support?\n1.4\u00a0\u00a0\u00a0What grammar does pycparser follow?\n1.5\u00a0\u00a0\u00a0How is pycparser licensed?\n1.6\u00a0\u00a0\u00a0Contact details\n2\u00a0\u00a0\u00a0Installing\n2.1\u00a0\u00a0\u00a0Prerequisites\n2.2\u00a0\u00a0\u00a0Installation process\n3\u00a0\u00a0\u00a0Using\n3.1\u00a0\u00a0\u00a0Interaction with the C preprocessor\n3.2\u00a0\u00a0\u00a0What about the standard C library headers?\n3.3\u00a0\u00a0\u00a0Basic usage\n3.4\u00a0\u00a0\u00a0Advanced usage\n4\u00a0\u00a0\u00a0Modifying\n5\u00a0\u00a0\u00a0Package contents\n6\u00a0\u00a0\u00a0Contributors\n\n\n\n\n\nREADME.rst\n\n\n\n\npycparser v2.21\n\n\n\n\nContents\n\n1\u00a0\u00a0\u00a0Introduction\n1.1\u00a0\u00a0\u00a0What is pycparser?\n1.2\u00a0\u00a0\u00a0What is it good for?\n1.3\u00a0\u00a0\u00a0Which version of C does pycparser support?\n1.4\u00a0\u00a0\u00a0What grammar does pycparser follow?\n1.5\u00a0\u00a0\u00a0How is pycparser licensed?\n1.6\u00a0\u00a0\u00a0Contact details\n\n\n2\u00a0\u00a0\u00a0Installing\n2.1\u00a0\u00a0\u00a0Prerequisites\n2.2\u00a0\u00a0\u00a0Installation process\n\n\n3\u00a0\u00a0\u00a0Using\n3.1\u00a0\u00a0\u00a0Interaction with the C preprocessor\n3.2\u00a0\u00a0\u00a0What about the standard C library headers?\n3.3\u00a0\u00a0\u00a0Basic usage\n3.4\u00a0\u00a0\u00a0Advanced usage\n\n\n4\u00a0\u00a0\u00a0Modifying\n5\u00a0\u00a0\u00a0Package contents\n6\u00a0\u00a0\u00a0Contributors\n\n\n\n1\u00a0\u00a0\u00a0Introduction\n\n1.1\u00a0\u00a0\u00a0What is pycparser?\npycparser is a parser for the C language, written in pure Python. It is a\nmodule designed to be easily integrated into applications that need to parse\nC source code.\n\n1.2\u00a0\u00a0\u00a0What is it good for?\nAnything that needs C code to be parsed. The following are some uses for\npycparser, taken from real user reports:\n\nC code obfuscator\nFront-end for various specialized C compilers\nStatic code checker\nAutomatic unit-test discovery\nAdding specialized extensions to the C language\n\nOne of the most popular uses of pycparser is in the cffi library, which uses it to parse the\ndeclarations of C functions and types in order to auto-generate FFIs.\npycparser is unique in the sense that it's written in pure Python - a very\nhigh level language that's easy to experiment with and tweak. To people familiar\nwith Lex and Yacc, pycparser's code will be simple to understand. It also\nhas no external dependencies (except for a Python interpreter), making it very\nsimple to install and deploy.\n\n1.3\u00a0\u00a0\u00a0Which version of C does pycparser support?\npycparser aims to support the full C99 language (according to the standard\nISO/IEC 9899). Some features from C11 are also supported, and patches to support\nmore are welcome.\npycparser supports very few GCC extensions, but it's fairly easy to set\nthings up so that it parses code with a lot of GCC-isms successfully. 
See the\nFAQ for more details.\n\n1.4\u00a0\u00a0\u00a0What grammar does pycparser follow?\npycparser very closely follows the C grammar provided in Annex A of the C99\nstandard (ISO/IEC 9899).\n\n1.5\u00a0\u00a0\u00a0How is pycparser licensed?\nBSD license.\n\n1.6\u00a0\u00a0\u00a0Contact details\nFor reporting problems with pycparser or submitting feature requests, please\nopen an issue, or submit a\npull request.\n\n2\u00a0\u00a0\u00a0Installing\n\n2.1\u00a0\u00a0\u00a0Prerequisites\n\npycparser was tested with Python 3.8+ on Linux, macOS and Windows.\npycparser has no external dependencies. The only non-stdlib library it\nuses is PLY, which is bundled in pycparser/ply. The current PLY version is\n3.10, retrieved from http://www.dabeaz.com/ply/\n\nNote that pycparser (and PLY) uses docstrings for grammar specifications.\nPython installations that strip docstrings (such as when using the Python\n-OO option) will fail to instantiate and use pycparser. You can try to\nwork around this problem by making sure the PLY parsing tables are pre-generated\nin normal mode; this isn't an officially supported/tested mode of operation,\nthough.\n\n2.2\u00a0\u00a0\u00a0Installation process\nThe recommended way to install pycparser is with pip:\n> pip install pycparser\n\n\n3\u00a0\u00a0\u00a0Using\n\n3.1\u00a0\u00a0\u00a0Interaction with the C preprocessor\nIn order to be compilable, C code must be preprocessed by the C preprocessor -\ncpp. cpp handles preprocessing directives like #include and\n#define, removes comments, and performs other minor tasks that prepare the C\ncode for compilation.\nFor all but the most trivial snippets of C code pycparser, like a C\ncompiler, must receive preprocessed C code in order to function correctly. If\nyou import the top-level parse_file function from the pycparser package,\nit will interact with cpp for you, as long as it's in your PATH, or you\nprovide a path to it.\nNote also that you can use gcc -E or clang -E instead of cpp. See\nthe using_gcc_E_libc.py example for more details. Windows users can download\nand install a binary build of Clang for Windows from this website.\n\n3.2\u00a0\u00a0\u00a0What about the standard C library headers?\nC code almost always #includes various header files from the standard C\nlibrary, like stdio.h. While (with some effort) pycparser can be made to\nparse the standard headers from any C compiler, it's much simpler to use the\nprovided \"fake\" standard includes for C11 in utils/fake_libc_include. These\nare standard C header files that contain only the bare necessities to allow\nvalid parsing of the files that use them. As a bonus, since they're minimal, it\ncan significantly improve the performance of parsing large C files.\nThe key point to understand here is that pycparser doesn't really care about\nthe semantics of types. It only needs to know whether some token encountered in\nthe source is a previously defined type. This is essential in order to be able\nto parse C correctly.\nSee this blog post\nfor more details.\nNote that the fake headers are not included in the pip package nor installed\nvia setup.py (#224).\n\n3.3\u00a0\u00a0\u00a0Basic usage\nTake a look at the examples directory of the distribution for a few examples\nof using pycparser. These should be enough to get you started. 
Please note\nthat most realistic C code samples would require running the C preprocessor\nbefore passing the code to pycparser; see the previous sections for more\ndetails.\n\n3.4\u00a0\u00a0\u00a0Advanced usage\nThe public interface of pycparser is well documented with comments in\npycparser/c_parser.py. For a detailed overview of the various AST nodes\ncreated by the parser, see pycparser/_c_ast.cfg.\nThere's also a FAQ available here.\nIn any case, you can always drop me an email for help.\n\n4\u00a0\u00a0\u00a0Modifying\nThere are a few points to keep in mind when modifying pycparser:\n\nThe code for pycparser's AST nodes is automatically generated from a\nconfiguration file - _c_ast.cfg, by _ast_gen.py. If you modify the AST\nconfiguration, make sure to re-generate the code. This can be done by running\nthe _build_tables.py script from the pycparser directory.\nMake sure you understand the optimized mode of pycparser - for that you\nmust read the docstring in the constructor of the CParser class. For\ndevelopment you should create the parser without optimizations, so that it\nwill regenerate the Yacc and Lex tables when you change the grammar.\n\n\n5\u00a0\u00a0\u00a0Package contents\nOnce you unzip the pycparser package, you'll see the following files and\ndirectories:\n\nREADME.rst:\nThis README file.\nLICENSE:\nThe pycparser license\nsetup.py:\nInstallation script\nexamples/:\nA directory with some examples of using pycparser\npycparser/:\nThe pycparser module source code.\ntests/:\nUnit tests.\nutils/fake_libc_include:\nMinimal standard C library include files that should allow to parse any C code.\nNote that these headers now include C11 code, so they may not work when the\npreprocessor is configured to an earlier C standard (like -std=c99).\nutils/internal/:\nInternal utilities for my own use. You probably don't need them.\n\n\n6\u00a0\u00a0\u00a0Contributors\nSome people have contributed to pycparser by opening issues on bugs they've\nfound and/or submitting patches. The list of contributors is in the CONTRIBUTORS\nfile in the source distribution. After pycparser moved to Github I stopped\nupdating this list because Github does a much better job at tracking\ncontributions.\n\n\n", "description": "C parser module written in pure Python."}, {"name": "pycountry", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npycountry\nData update policy\nDonations / Monetary Support\nContributions\nCountries (ISO 3166-1)\nHistoric Countries (ISO 3166-3)\nCountry subdivisions (ISO 3166-2)\nScripts (ISO 15924)\nCurrencies (ISO 4217)\nLanguages (ISO 639-3)\nLocales\nLookups\nPyInstaller Compatibility\n\n\n\n\n\nREADME.rst\n\n\n\n\npycountry\npycountry provides the ISO databases for the standards:\n\n639-3 Languages\n3166 Countries\n3166-3 Deleted countries\n3166-2 Subdivisions of countries\n4217 Currencies\n15924 Scripts\n\nThe package includes a copy from Debian's pkg-isocodes and makes the data\naccessible through a Python API.\nTranslation files for the various strings are included as well.\n\nData update policy\nNo changes to the data will be accepted into pycountry. This is a pure wrapper\naround the ISO standard using the pkg-isocodes database from Debian as is.\nIf you need changes to the political situation in the world, please talk to\nthe ISO or Debian people, not me.\n\nDonations / Monetary Support\nThis is a small project that I maintain in my personal time. I am not\ninterested in personal financial gain. 
However, if you would like to support\nthe project then I would love if you would donate to Feminist Frequency instead. Also, let the world know you\ndid so, so that others can follow your path.\n\nContributions\nThe code lives in a git repository on GitHub, and issues must be reported in there as well.\n\nCountries (ISO 3166-1)\nCountries are accessible through a database object that is already configured\nupon import of pycountry and works as an iterable:\n>>> import pycountry\n>>> len(pycountry.countries)\n249\n>>> list(pycountry.countries)[0]\nCountry(alpha_2='AF', alpha_3='AFG', name='Afghanistan', numeric='004', official_name='Islamic Republic of Afghanistan')\nSpecific countries can be looked up by their various codes and provide the\ninformation included in the standard as attributes:\n>>> germany = pycountry.countries.get(alpha_2='DE')\n>>> germany\nCountry(alpha_2='DE', alpha_3='DEU', name='Germany', numeric='276', official_name='Federal Republic of Germany')\n>>> germany.alpha_2\n'DE'\n>>> germany.alpha_3\n'DEU'\n>>> germany.numeric\n'276'\n>>> germany.name\n'Germany'\n>>> germany.official_name\n'Federal Republic of Germany'\nThere's also a \"fuzzy\" search to help people discover \"proper\" countries for\nnames that might only actually be subdivisions. The fuzziness also includes\nnormalizing unicode accents. There's also a bit of prioritization included\nto prefer matches on country names before subdivision names and have countries\nwith more matches be listed before ones with fewer matches:\n>>> pycountry.countries.search_fuzzy('England')\n[Country(alpha_2='GB', alpha_3='GBR', name='United Kingdom', numeric='826', official_name='United Kingdom of Great Britain and Northern Ireland')]\n\n>>> pycountry.countries.search_fuzzy('Cote')\n[Country(alpha_2='CI', alpha_3='CIV', name=\"C\u00f4te d'Ivoire\", numeric='384', official_name=\"Republic of C\u00f4te d'Ivoire\"),\n Country(alpha_2='FR', alpha_3='FRA', name='France', numeric='250', official_name='French Republic'),\n Country(alpha_2='HN', alpha_3='HND', name='Honduras', numeric='340', official_name='Republic of Honduras')]\n\nHistoric Countries (ISO 3166-3)\nThe historic_countries database contains former countries that have been\nremoved from the standard and are now included in ISO 3166-3, excluding\nexisting ones:\n>>> ussr = pycountry.historic_countries.get(alpha_3='SUN')\n>>> ussr\nCountry(alpha_3='SUN', alpha_4='SUHH', withdrawal_date='1992-08-30', name='USSR, Union of Soviet Socialist Republics', numeric='810')\n>>> ussr.alpha_4\n'SUHH'\n>>> ussr.alpha_3\n'SUN'\n>>> ussr.name\n'USSR, Union of Soviet Socialist Republics'\n>>> ussr.withdrawal_date\n'1992-08-30'\n\nCountry subdivisions (ISO 3166-2)\nThe country subdivisions are a little more complex than the countries itself\nbecause they provide a nested and typed structure.\nAll subdivisons can be accessed directly:\n>>> len(pycountry.subdivisions)\n4847\n>>> list(pycountry.subdivisions)[0]\nSubdivision(code='AD-07', country_code='AD', name='Andorra la Vella', parent_code=None, type='Parish')\nSubdivisions can be accessed using their unique code and provide at least\ntheir code, name and type:\n>>> de_st = pycountry.subdivisions.get(code='DE-ST')\n>>> de_st.code\n'DE-ST'\n>>> de_st.name\n'Sachsen-Anhalt'\n>>> de_st.type\n'State'\n>>> de_st.country\nCountry(alpha_2='DE', alpha_3='DEU', name='Germany', numeric='276', official_name='Federal Republic of Germany')\nSome subdivisions specify another subdivision as a parent:\n>>> al_br = 
pycountry.subdivisions.get(code='AL-BU')\n>>> al_br.code\n'AL-BU'\n>>> al_br.name\n'Bulqiz\\xeb'\n>>> al_br.type\n'District'\n>>> al_br.parent_code\n'AL-09'\n>>> al_br.parent\nSubdivision(code='AL-09', country_code='AL', name='Dib\\xebr', parent_code=None, type='County')\n>>> al_br.parent.name\n'Dib\\xebr'\nThe divisions of a single country can be queried using the country_code index:\n>>> len(pycountry.subdivisions.get(country_code='DE'))\n16\n\n>>> len(pycountry.subdivisions.get(country_code='US'))\n57\n\nScripts (ISO 15924)\nScripts are available from a database similar to the countries:\n>>> len(pycountry.scripts)\n169\n>>> list(pycountry.scripts)[0]\nScript(alpha_4='Afak', name='Afaka', numeric='439')\n\n>>> latin = pycountry.scripts.get(name='Latin')\n>>> latin\nScript(alpha_4='Latn', name='Latin', numeric='215')\n>>> latin.alpha4\n'Latn'\n>>> latin.name\n'Latin'\n>>> latin.numeric\n'215'\n\nCurrencies (ISO 4217)\nThe currencies database is, again, similar to the ones before:\n>>> len(pycountry.currencies)\n182\n>>> list(pycountry.currencies)[0]\nCurrency(alpha_3='AED', name='UAE Dirham', numeric='784')\n>>> argentine_peso = pycountry.currencies.get(alpha_3='ARS')\n>>> argentine_peso\nCurrency(alpha_3='ARS', name='Argentine Peso', numeric='032')\n>>> argentine_peso.alpha_3\n'ARS'\n>>> argentine_peso.name\n'Argentine Peso'\n>>> argentine_peso.numeric\n'032'\n\nLanguages (ISO 639-3)\nThe languages database is similar too:\n>>> len(pycountry.languages)\n7874\n>>> list(pycountry.languages)[0]\nLanguage(alpha_3='aaa', name='Ghotuo', scope='I', type='L')\n\n>>> aragonese = pycountry.languages.get(alpha_2='an')\n>>> aragonese.alpha_2\n'an'\n>>> aragonese.alpha_3\n'arg'\n>>> aragonese.name\n'Aragonese'\n\n>>> bengali = pycountry.languages.get(alpha_2='bn')\n>>> bengali.name\n'Bengali'\n>>> bengali.common_name\n'Bangla'\n\nLocales\nLocales are available in the pycountry.LOCALES_DIR subdirectory of this\npackage. The translation domains are called isoXXX according to the standard\nthey provide translations for. The directory is structured in a way compatible\nto Python's gettext module.\nHere is an example translating language names:\n>>> import gettext\n>>> german = gettext.translation('iso3166-1', pycountry.LOCALES_DIR,\n...                              languages=['de'])\n>>> german.install()\n>>> _('Germany')\n'Deutschland'\n\nLookups\nFor each database (countries, languages, scripts, etc.), you can also look up\nentities case insensitively without knowing which key the value may match.  
For\nexample:\n>>> pycountry.countries.lookup('de')\n<pycountry.db.Country object at 0x...>\nThe search ends with the first match, which is returned.\n\nPyInstaller Compatibility\nSome users have reported issues using PyCountry with PyInstaller guidance on\nhow to handle the issues can be found in the PyInstaller Google Group.\n\n\n", "description": "ISO country, language and currency database for Python."}, {"name": "py", "readme": "\n\n\n\n\nNOTE: this library is in maintenance mode and should not be used in new code.\nThe py lib is a Python development support library featuring\nthe following tools and modules:\n\npy.path:  uniform local and svn path objects  -> please use pathlib/pathlib2 instead\npy.apipkg:  explicit API control and lazy-importing -> please use the standalone package instead\npy.iniconfig:  easy parsing of .ini files -> please use the standalone package instead\npy.code: dynamic code generation and introspection (deprecated, moved to pytest as a implementation detail).\n\nNOTE: prior to the 1.4 release this distribution used to\ncontain py.test which is now its own package, see https://docs.pytest.org\nFor questions and more information please visit https://py.readthedocs.io\nBugs and issues: https://github.com/pytest-dev/py\nAuthors: Holger Krekel and others, 2004-2017\n", "description": "Python development support library with path, ini parsing and code analysis tools."}, {"name": "pure-eval", "readme": "\n\n\n\nREADME.md\n\n\n\n\npure_eval\n  \nThis is a Python package that lets you safely evaluate certain AST nodes without triggering arbitrary code that may have unwanted side effects.\nIt can be installed from PyPI:\npip install pure_eval\n\nTo demonstrate usage, suppose we have an object defined as follows:\nclass Rectangle:\n    def __init__(self, width, height):\n        self.width = width\n        self.height = height\n\n    @property\n    def area(self):\n        print(\"Calculating area...\")\n        return self.width * self.height\n\n\nrect = Rectangle(3, 5)\nGiven the rect object, we want to evaluate whatever expressions we can in this source code:\nsource = \"(rect.width, rect.height, rect.area)\"\nThis library works with the AST, so let's parse the source code and peek inside:\nimport ast\n\ntree = ast.parse(source)\nthe_tuple = tree.body[0].value\nfor node in the_tuple.elts:\n    print(ast.dump(node))\nOutput:\nAttribute(value=Name(id='rect', ctx=Load()), attr='width', ctx=Load())\nAttribute(value=Name(id='rect', ctx=Load()), attr='height', ctx=Load())\nAttribute(value=Name(id='rect', ctx=Load()), attr='area', ctx=Load())\nNow to actually use the library. First construct an Evaluator:\nfrom pure_eval import Evaluator\n\nevaluator = Evaluator({\"rect\": rect})\nThe argument to Evaluator should be a mapping from variable names to their values. Or if you have access to the stack frame where rect is defined, you can instead use:\nevaluator = Evaluator.from_frame(frame)\nNow to evaluate some nodes, using evaluator[node]:\nprint(\"rect.width:\", evaluator[the_tuple.elts[0]])\nprint(\"rect:\", evaluator[the_tuple.elts[0].value])\nOutput:\nrect.width: 3\nrect: <__main__.Rectangle object at 0x105b0dd30>\n\nOK, but you could have done the same thing with eval. The useful part is that it will refuse to evaluate the property rect.area because that would trigger unknown code. 
If we try, it'll raise a CannotEval exception.\nfrom pure_eval import CannotEval\n\ntry:\n    print(\"rect.area:\", evaluator[the_tuple.elts[2]])  # fails\nexcept CannotEval as e:\n    print(e)  # prints CannotEval\nTo find all the expressions that can be evaluated in a tree:\nfor node, value in evaluator.find_expressions(tree):\n    print(ast.dump(node), value)\nOutput:\nAttribute(value=Name(id='rect', ctx=Load()), attr='width', ctx=Load()) 3\nAttribute(value=Name(id='rect', ctx=Load()), attr='height', ctx=Load()) 5\nName(id='rect', ctx=Load()) <__main__.Rectangle object at 0x105568d30>\nName(id='rect', ctx=Load()) <__main__.Rectangle object at 0x105568d30>\nName(id='rect', ctx=Load()) <__main__.Rectangle object at 0x105568d30>\nNote that this includes rect three times, once for each appearance in the source code. Since all these nodes are equivalent, we can group them together:\nfrom pure_eval import group_expressions\n\nfor nodes, values in group_expressions(evaluator.find_expressions(tree)):\n    print(len(nodes), \"nodes with value:\", values)\nOutput:\n1 nodes with value: 3\n1 nodes with value: 5\n3 nodes with value: <__main__.Rectangle object at 0x10d374d30>\n\nIf we want to list all the expressions in a tree, we may want to filter out certain expressions whose values are obvious. For example, suppose we have a function foo:\ndef foo():\n    pass\nIf we refer to foo by its name as usual, then that's not interesting:\nfrom pure_eval import is_expression_interesting\n\nnode = ast.parse('foo').body[0].value\nprint(ast.dump(node))\nprint(is_expression_interesting(node, foo))\nOutput:\nName(id='foo', ctx=Load())\nFalse\nBut if we refer to it by a different name, then it's interesting:\nnode = ast.parse('bar').body[0].value\nprint(ast.dump(node))\nprint(is_expression_interesting(node, foo))\nOutput:\nName(id='bar', ctx=Load())\nTrue\nIn general is_expression_interesting returns False for the following values:\n\nLiterals (e.g. 123, 'abc', [1, 2, 3], {'a': (), 'b': ([1, 2], [3])})\nVariables or attributes whose name is equal to the value's __name__, such as foo above or self.foo if it was a method.\nBuiltins (e.g. len) referred to by their usual name.\n\nTo make things easier, you can combine finding expressions, grouping them, and filtering out the obvious ones with:\nevaluator.interesting_expressions_grouped(root)\nTo get the source code of an AST node, I recommend asttokens.\nHere's a complete example that brings it all together:\nfrom asttokens import ASTTokens\nfrom pure_eval import Evaluator\n\nsource = \"\"\"\nx = 1\nd = {x: 2}\ny = d[x]\n\"\"\"\n\nnames = {}\nexec(source, names)\natok = ASTTokens(source, parse=True)\nfor nodes, value in Evaluator(names).interesting_expressions_grouped(atok.tree):\n    print(atok.get_text(nodes[0]), \"=\", value)\nOutput:\nx = 1\nd = {1: 2}\ny = 2\nd[x] = 2\n\n\n"}, {"name": "ptyprocess", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nLaunch a subprocess in a pseudo terminal (pty), and interact with both the\nprocess and its pty.\nSometimes, piping stdin and stdout is not enough. 
There might be a password\nprompt that doesn't read from stdin, output that changes when it's going to a\npipe rather than a terminal, or curses-style interfaces that rely on a terminal.\nIf you need to automate these things, running the process in a pseudo terminal\n(pty) is the answer.\nInterface:\nfrom ptyprocess import PtyProcessUnicode\np = PtyProcessUnicode.spawn(['python'])\np.read(20)\np.write('6+6\\n')\np.read(20)\n\n\n", "description": "Launch a subprocess in a pseudo terminal (pty) and interact with it."}, {"name": "psutil", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSummary\nFunding\nSponsors\nSupporters\nContributing\nExample usages\nCPU\nMemory\nDisks\nNetwork\nSensors\nOther system info\nProcess management\nFurther process APIs\nWindows services\nProjects using psutil\nPortings\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n \n \n \n \n \n\n \n\n     \n\n\n\n\n\n\nHome\u00a0\u00a0\u00a0\n    Install\u00a0\u00a0\u00a0\n    Documentation\u00a0\u00a0\u00a0\n    Download\u00a0\u00a0\u00a0\n    Forum\u00a0\u00a0\u00a0\n    Blog\u00a0\u00a0\u00a0\n    Funding\u00a0\u00a0\u00a0\n    What's new\u00a0\u00a0\u00a0\n\nSummary\npsutil (process and system utilities) is a cross-platform library for\nretrieving information on running processes and system utilization\n(CPU, memory, disks, network, sensors) in Python.\nIt is useful mainly for system monitoring, profiling and limiting process\nresources and management of running processes.\nIt implements many functionalities offered by classic UNIX command line tools\nsuch as ps, top, iotop, lsof, netstat, ifconfig, free and others.\npsutil currently supports the following platforms:\n\nLinux\nWindows\nmacOS\nFreeBSD, OpenBSD, NetBSD\nSun Solaris\nAIX\n\nSupported Python versions are 2.7, 3.6+ and\nPyPy.\n\nFunding\nWhile psutil is free software and will always be, the project would benefit\nimmensely from some funding.\nKeeping up with bug reports and maintenance has become hardly sustainable for\nme alone in terms of time.\nIf you're a company that's making significant use of psutil you can consider\nbecoming a sponsor via GitHub Sponsors,\nOpen Collective or\nPayPal\nand have your logo displayed in here and psutil doc.\n\nSponsors\n\n\n\n\n    \u00a0\u00a0\n    \n\n\n\nadd your logo\nSupporters\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nadd your avatar\nContributing\nSee contributing guidelines.\n\nExample usages\nThis represents pretty much the whole psutil API.\n\nCPU\n>>> import psutil\n>>>\n>>> psutil.cpu_times()\nscputimes(user=3961.46, nice=169.729, system=2150.659, idle=16900.540, iowait=629.59, irq=0.0, softirq=19.42, steal=0.0, guest=0, nice=0.0)\n>>>\n>>> for x in range(3):\n...     psutil.cpu_percent(interval=1)\n...\n4.0\n5.9\n3.8\n>>>\n>>> for x in range(3):\n...     psutil.cpu_percent(interval=1, percpu=True)\n...\n[4.0, 6.9, 3.7, 9.2]\n[7.0, 8.5, 2.4, 2.1]\n[1.2, 9.0, 9.9, 7.2]\n>>>\n>>> for x in range(3):\n...     
psutil.cpu_times_percent(interval=1, percpu=False)\n...\nscputimes(user=1.5, nice=0.0, system=0.5, idle=96.5, iowait=1.5, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)\nscputimes(user=1.0, nice=0.0, system=0.0, idle=99.0, iowait=0.0, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)\nscputimes(user=2.0, nice=0.0, system=0.0, idle=98.0, iowait=0.0, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)\n>>>\n>>> psutil.cpu_count()\n4\n>>> psutil.cpu_count(logical=False)\n2\n>>>\n>>> psutil.cpu_stats()\nscpustats(ctx_switches=20455687, interrupts=6598984, soft_interrupts=2134212, syscalls=0)\n>>>\n>>> psutil.cpu_freq()\nscpufreq(current=931.42925, min=800.0, max=3500.0)\n>>>\n>>> psutil.getloadavg()  # also on Windows (emulated)\n(3.14, 3.89, 4.67)\n\nMemory\n>>> psutil.virtual_memory()\nsvmem(total=10367352832, available=6472179712, percent=37.6, used=8186245120, free=2181107712, active=4748992512, inactive=2758115328, buffers=790724608, cached=3500347392, shared=787554304)\n>>> psutil.swap_memory()\nsswap(total=2097147904, used=296128512, free=1801019392, percent=14.1, sin=304193536, sout=677842944)\n>>>\n\nDisks\n>>> psutil.disk_partitions()\n[sdiskpart(device='/dev/sda1', mountpoint='/', fstype='ext4', opts='rw,nosuid', maxfile=255, maxpath=4096),\n sdiskpart(device='/dev/sda2', mountpoint='/home', fstype='ext', opts='rw', maxfile=255, maxpath=4096)]\n>>>\n>>> psutil.disk_usage('/')\nsdiskusage(total=21378641920, used=4809781248, free=15482871808, percent=22.5)\n>>>\n>>> psutil.disk_io_counters(perdisk=False)\nsdiskio(read_count=719566, write_count=1082197, read_bytes=18626220032, write_bytes=24081764352, read_time=5023392, write_time=63199568, read_merged_count=619166, write_merged_count=812396, busy_time=4523412)\n>>>\n\nNetwork\n>>> psutil.net_io_counters(pernic=True)\n{'eth0': netio(bytes_sent=485291293, bytes_recv=6004858642, packets_sent=3251564, packets_recv=4787798, errin=0, errout=0, dropin=0, dropout=0),\n 'lo': netio(bytes_sent=2838627, bytes_recv=2838627, packets_sent=30567, packets_recv=30567, errin=0, errout=0, dropin=0, dropout=0)}\n>>>\n>>> psutil.net_connections(kind='tcp')\n[sconn(fd=115, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=48776), raddr=addr(ip='93.186.135.91', port=80), status='ESTABLISHED', pid=1254),\n sconn(fd=117, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=43761), raddr=addr(ip='72.14.234.100', port=80), status='CLOSING', pid=2987),\n ...]\n>>>\n>>> psutil.net_if_addrs()\n{'lo': [snicaddr(family=<AddressFamily.AF_INET: 2>, address='127.0.0.1', netmask='255.0.0.0', broadcast='127.0.0.1', ptp=None),\n        snicaddr(family=<AddressFamily.AF_INET6: 10>, address='::1', netmask='ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff', broadcast=None, ptp=None),\n        snicaddr(family=<AddressFamily.AF_LINK: 17>, address='00:00:00:00:00:00', netmask=None, broadcast='00:00:00:00:00:00', ptp=None)],\n 'wlan0': [snicaddr(family=<AddressFamily.AF_INET: 2>, address='192.168.1.3', netmask='255.255.255.0', broadcast='192.168.1.255', ptp=None),\n           snicaddr(family=<AddressFamily.AF_INET6: 10>, address='fe80::c685:8ff:fe45:641%wlan0', netmask='ffff:ffff:ffff:ffff::', broadcast=None, ptp=None),\n           snicaddr(family=<AddressFamily.AF_LINK: 17>, address='c4:85:08:45:06:41', netmask=None, broadcast='ff:ff:ff:ff:ff:ff', ptp=None)]}\n>>>\n>>> psutil.net_if_stats()\n{'lo': snicstats(isup=True, duplex=<NicDuplex.NIC_DUPLEX_UNKNOWN: 0>, 
speed=0, mtu=65536, flags='up,loopback,running'),\n 'wlan0': snicstats(isup=True, duplex=<NicDuplex.NIC_DUPLEX_FULL: 2>, speed=100, mtu=1500, flags='up,broadcast,running,multicast')}\n>>>\n\nSensors\n>>> import psutil\n>>> psutil.sensors_temperatures()\n{'acpitz': [shwtemp(label='', current=47.0, high=103.0, critical=103.0)],\n 'asus': [shwtemp(label='', current=47.0, high=None, critical=None)],\n 'coretemp': [shwtemp(label='Physical id 0', current=52.0, high=100.0, critical=100.0),\n              shwtemp(label='Core 0', current=45.0, high=100.0, critical=100.0)]}\n>>>\n>>> psutil.sensors_fans()\n{'asus': [sfan(label='cpu_fan', current=3200)]}\n>>>\n>>> psutil.sensors_battery()\nsbattery(percent=93, secsleft=16628, power_plugged=False)\n>>>\n\nOther system info\n>>> import psutil\n>>> psutil.users()\n[suser(name='giampaolo', terminal='pts/2', host='localhost', started=1340737536.0, pid=1352),\n suser(name='giampaolo', terminal='pts/3', host='localhost', started=1340737792.0, pid=1788)]\n>>>\n>>> psutil.boot_time()\n1365519115.0\n>>>\n\nProcess management\n>>> import psutil\n>>> psutil.pids()\n[1, 2, 3, 4, 5, 6, 7, 46, 48, 50, 51, 178, 182, 222, 223, 224, 268, 1215,\n 1216, 1220, 1221, 1243, 1244, 1301, 1601, 2237, 2355, 2637, 2774, 3932,\n 4176, 4177, 4185, 4187, 4189, 4225, 4243, 4245, 4263, 4282, 4306, 4311,\n 4312, 4313, 4314, 4337, 4339, 4357, 4358, 4363, 4383, 4395, 4408, 4433,\n 4443, 4445, 4446, 5167, 5234, 5235, 5252, 5318, 5424, 5644, 6987, 7054,\n 7055, 7071]\n>>>\n>>> p = psutil.Process(7055)\n>>> p\npsutil.Process(pid=7055, name='python3', status='running', started='09:04:44')\n>>> p.pid\n7055\n>>> p.name()\n'python3'\n>>> p.exe()\n'/usr/bin/python3'\n>>> p.cwd()\n'/home/giampaolo'\n>>> p.cmdline()\n['/usr/bin/python3', 'main.py']\n>>>\n>>> p.ppid()\n7054\n>>> p.parent()\npsutil.Process(pid=4699, name='bash', status='sleeping', started='09:06:44')\n>>> p.parents()\n[psutil.Process(pid=4699, name='bash', started='09:06:44'),\n psutil.Process(pid=4689, name='gnome-terminal-server', status='sleeping', started='0:06:44'),\n psutil.Process(pid=1, name='systemd', status='sleeping', started='05:56:55')]\n>>> p.children(recursive=True)\n[psutil.Process(pid=29835, name='python3', status='sleeping', started='11:45:38'),\n psutil.Process(pid=29836, name='python3', status='waking', started='11:43:39')]\n>>>\n>>> p.status()\n'running'\n>>> p.create_time()\n1267551141.5019531\n>>> p.terminal()\n'/dev/pts/0'\n>>>\n>>> p.username()\n'giampaolo'\n>>> p.uids()\npuids(real=1000, effective=1000, saved=1000)\n>>> p.gids()\npgids(real=1000, effective=1000, saved=1000)\n>>>\n>>> p.cpu_times()\npcputimes(user=1.02, system=0.31, children_user=0.32, children_system=0.1, iowait=0.0)\n>>> p.cpu_percent(interval=1.0)\n12.1\n>>> p.cpu_affinity()\n[0, 1, 2, 3]\n>>> p.cpu_affinity([0, 1])  # set\n>>> p.cpu_num()\n1\n>>>\n>>> p.memory_info()\npmem(rss=10915840, vms=67608576, shared=3313664, text=2310144, lib=0, data=7262208, dirty=0)\n>>> p.memory_full_info()  # \"real\" USS memory usage (Linux, macOS, Win only)\npfullmem(rss=10199040, vms=52133888, shared=3887104, text=2867200, lib=0, data=5967872, dirty=0, uss=6545408, pss=6872064, swap=0)\n>>> p.memory_percent()\n0.7823\n>>> p.memory_maps()\n[pmmap_grouped(path='/lib/x8664-linux-gnu/libutil-2.15.so', rss=32768, size=2125824, pss=32768, shared_clean=0, shared_dirty=0, private_clean=20480, private_dirty=12288, referenced=32768, anonymous=12288, swap=0),\n pmmap_grouped(path='/lib/x8664-linux-gnu/libc-2.15.so', rss=3821568, size=3842048, pss=3821568, 
shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=3821568, referenced=3575808, anonymous=3821568, swap=0),\n pmmap_grouped(path='[heap]',  rss=32768, size=139264, pss=32768, shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=32768, referenced=32768, anonymous=32768, swap=0),\n pmmap_grouped(path='[stack]', rss=2465792, size=2494464, pss=2465792, shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=2465792, referenced=2277376, anonymous=2465792, swap=0),\n ...]\n>>>\n>>> p.io_counters()\npio(read_count=478001, write_count=59371, read_bytes=700416, write_bytes=69632, read_chars=456232, write_chars=517543)\n>>>\n>>> p.open_files()\n[popenfile(path='/home/giampaolo/monit.py', fd=3, position=0, mode='r', flags=32768),\n popenfile(path='/var/log/monit.log', fd=4, position=235542, mode='a', flags=33793)]\n>>>\n>>> p.connections(kind='tcp')\n[pconn(fd=115, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=48776), raddr=addr(ip='93.186.135.91', port=80), status='ESTABLISHED'),\n pconn(fd=117, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=43761), raddr=addr(ip='72.14.234.100', port=80), status='CLOSING')]\n>>>\n>>> p.threads()\n[pthread(id=5234, user_time=22.5, system_time=9.2891),\n pthread(id=5237, user_time=0.0707, system_time=1.1)]\n>>>\n>>> p.num_threads()\n4\n>>> p.num_fds()\n8\n>>> p.num_ctx_switches()\npctxsw(voluntary=78, involuntary=19)\n>>>\n>>> p.nice()\n0\n>>> p.nice(10)  # set\n>>>\n>>> p.ionice(psutil.IOPRIO_CLASS_IDLE)  # IO priority (Win and Linux only)\n>>> p.ionice()\npionice(ioclass=<IOPriority.IOPRIO_CLASS_IDLE: 3>, value=0)\n>>>\n>>> p.rlimit(psutil.RLIMIT_NOFILE, (5, 5))  # set resource limits (Linux only)\n>>> p.rlimit(psutil.RLIMIT_NOFILE)\n(5, 5)\n>>>\n>>> p.environ()\n{'LC_PAPER': 'it_IT.UTF-8', 'SHELL': '/bin/bash', 'GREP_OPTIONS': '--color=auto',\n'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg',\n ...}\n>>>\n>>> p.as_dict()\n{'status': 'running', 'num_ctx_switches': pctxsw(voluntary=63, involuntary=1), 'pid': 5457, ...}\n>>> p.is_running()\nTrue\n>>> p.suspend()\n>>> p.resume()\n>>>\n>>> p.terminate()\n>>> p.kill()\n>>> p.wait(timeout=3)\n<Exitcode.EX_OK: 0>\n>>>\n>>> psutil.test()\nUSER         PID %CPU %MEM     VSZ     RSS TTY        START    TIME  COMMAND\nroot           1  0.0  0.0   24584    2240            Jun17   00:00  init\nroot           2  0.0  0.0       0       0            Jun17   00:00  kthreadd\n...\ngiampaolo  31475  0.0  0.0   20760    3024 /dev/pts/0 Jun19   00:00  python2.4\ngiampaolo  31721  0.0  2.2  773060  181896            00:04   10:30  chrome\nroot       31763  0.0  0.0       0       0            00:05   00:00  kworker/0:1\n>>>\n\nFurther process APIs\n>>> import psutil\n>>> for proc in psutil.process_iter(['pid', 'name']):\n...     print(proc.info)\n...\n{'pid': 1, 'name': 'systemd'}\n{'pid': 2, 'name': 'kthreadd'}\n{'pid': 3, 'name': 'ksoftirqd/0'}\n...\n>>>\n>>> psutil.pid_exists(3)\nTrue\n>>>\n>>> def on_terminate(proc):\n...     
print(\"process {} terminated\".format(proc))\n...\n>>> # waits for multiple processes to terminate\n>>> gone, alive = psutil.wait_procs(procs_list, timeout=3, callback=on_terminate)\n>>>\n\nWindows services\n>>> list(psutil.win_service_iter())\n[<WindowsService(name='AeLookupSvc', display_name='Application Experience') at 38850096>,\n <WindowsService(name='ALG', display_name='Application Layer Gateway Service') at 38850128>,\n <WindowsService(name='APNMCP', display_name='Ask Update Service') at 38850160>,\n <WindowsService(name='AppIDSvc', display_name='Application Identity') at 38850192>,\n ...]\n>>> s = psutil.win_service_get('alg')\n>>> s.as_dict()\n{'binpath': 'C:\\\\Windows\\\\System32\\\\alg.exe',\n 'description': 'Provides support for 3rd party protocol plug-ins for Internet Connection Sharing',\n 'display_name': 'Application Layer Gateway Service',\n 'name': 'alg',\n 'pid': None,\n 'start_type': 'manual',\n 'status': 'stopped',\n 'username': 'NT AUTHORITY\\\\LocalService'}\n\nProjects using psutil\nHere's some I find particularly interesting:\n\nhttps://github.com/google/grr\nhttps://github.com/facebook/osquery/\nhttps://github.com/nicolargo/glances\nhttps://github.com/aristocratos/bpytop\nhttps://github.com/Jahaja/psdash\nhttps://github.com/ajenti/ajenti\nhttps://github.com/home-assistant/home-assistant/\n\n\nPortings\n\nGo: https://github.com/shirou/gopsutil\nC: https://github.com/hamon-in/cpslib\nRust: https://github.com/rust-psutil/rust-psutil\nNim: https://github.com/johnscillieri/psutil-nim\n\n\n\n", "description": "Cross-platform lib for process and system monitoring in Python."}, {"name": "pronouncing", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npronouncing\nInstallation\nLicense\nAcknowledgements\n\n\n\n\n\nREADME.rst\n\n\n\n\npronouncing\n\n\n\n\n\nPronouncing is a simple interface for the CMU Pronouncing Dictionary. It's easy\nto use and has no external dependencies. For example, here's how to find rhymes\nfor a given word:\n>>> import pronouncing\n>>> pronouncing.rhymes(\"climbing\")\n['diming', 'liming', 'priming', 'rhyming', 'timing']\n\nRead the documentation here: https://pronouncing.readthedocs.org.\nI made Pronouncing because I wanted to be able to use the CMU Pronouncing\nDictionary in my projects (and teach other people how to use it) without having\nto install the grand behemoth that is NLTK.\n\nInstallation\nInstall with pip like so:\npip install pronouncing\n\nYou can also download the source code and install manually:\npython setup.py install\n\n\nLicense\nThe Python code in this module is distributed with a BSD license.\n\nAcknowledgements\nThis package was originally developed as part of my Spring 2015 research\nfellowship at ITP. 
Thank you to the program and\nits students for their interest and support!\n\n\n", "description": "Simple interface for the CMU Pronouncing Dictionary to get phonetic transcriptions of English words."}, {"name": "prompt-toolkit", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPython Prompt Toolkit\nGallery\nprompt_toolkit features\nInstallation\nAbout Windows support\nGetting started\nPhilosophy\nProjects using prompt_toolkit\nSpecial thanks to\n\n\n\n\n\nREADME.rst\n\n\n\n\nPython Prompt Toolkit\n \n  \n \n\nprompt_toolkit is a library for building powerful interactive command line applications in Python.\nRead the documentation on readthedocs.\n\nGallery\nptpython is an interactive\nPython Shell, build on top of prompt_toolkit.\n\nMore examples\n\nprompt_toolkit features\nprompt_toolkit could be a replacement for GNU readline, but it can be much\nmore than that.\nSome features:\n\nPure Python.\nSyntax highlighting of the input while typing. (For instance, with a Pygments lexer.)\nMulti-line input editing.\nAdvanced code completion.\nBoth Emacs and Vi key bindings. (Similar to readline.)\nEven some advanced Vi functionality, like named registers and digraphs.\nReverse and forward incremental search.\nWorks well with Unicode double width characters. (Chinese input.)\nSelecting text for copy/paste. (Both Emacs and Vi style.)\nSupport for bracketed paste.\nMouse support for cursor positioning and scrolling.\nAuto suggestions. (Like fish shell.)\nMultiple input buffers.\nNo global state.\nLightweight, the only dependencies are Pygments and wcwidth.\nRuns on Linux, OS X, FreeBSD, OpenBSD and Windows systems.\nAnd much more...\n\nFeel free to create tickets for bugs and feature requests, and create pull\nrequests if you have nice patches that you would like to share with others.\n\nInstallation\npip install prompt_toolkit\n\nFor Conda, do:\nconda install -c https://conda.anaconda.org/conda-forge prompt_toolkit\n\n\nAbout Windows support\nprompt_toolkit is cross platform, and everything that you build on top\nshould run fine on both Unix and Windows systems. Windows support is best on\nrecent Windows 10 builds, for which the command line window supports vt100\nescape sequences. (If not supported, we fall back to using Win32 APIs for color\nand cursor movements).\nIt's worth noting that the implementation is a \"best effort of what is\npossible\". Both Unix and Windows terminals have their limitations. But in\ngeneral, the Unix experience will still be a little better.\nFor Windows, it's recommended to use either cmder or conemu.\n\nGetting started\nThe most simple example of the library would look like this:\nfrom prompt_toolkit import prompt\n\nif __name__ == '__main__':\n    answer = prompt('Give me some input: ')\n    print('You said: %s' % answer)\nFor more complex examples, have a look in the examples directory. All\nexamples are chosen to demonstrate only one thing. Also, don't be afraid to\nlook at the source code. The implementation of the prompt function could be\na good start.\n\nPhilosophy\nThe source code of prompt_toolkit should be readable, concise and\nefficient. We prefer short functions focusing each on one task and for which\nthe input and output types are clearly specified. We mostly prefer composition\nover inheritance, because inheritance can result in too much functionality in\nthe same object. We prefer immutable objects where possible (objects don't\nchange after initialization). Reusability is important. 
We absolutely refrain\nfrom having a changing global state, it should be possible to have multiple\nindependent instances of the same code in the same process. The architecture\nshould be layered: the lower levels operate on primitive operations and data\nstructures giving -- when correctly combined -- all the possible flexibility;\nwhile at the higher level, there should be a simpler API, ready-to-use and\nsufficient for most use cases. Thinking about algorithms and efficiency is\nimportant, but avoid premature optimization.\n\nProjects using prompt_toolkit\n\nSpecial thanks to\n\nPygments: Syntax highlighter.\nwcwidth: Determine columns needed for a wide characters.\n\n\n\n"}, {"name": "prometheus-client", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPrometheus Python Client\nThree Step Demo\nInstallation\nInstrumenting\nCounter\nGauge\nSummary\nHistogram\nInfo\nEnum\nLabels\nExemplars\nDisabling _created metrics\nProcess Collector\nPlatform Collector\nDisabling Default Collector metrics\nExporting\nHTTP\nTwisted\nWSGI\nASGI\nFlask\nFastAPI + Gunicorn\nNode exporter textfile collector\nExporting to a Pushgateway\nHandlers for authentication\nBridges\nGraphite\nCustom Collectors\nMultiprocess Mode (E.g. Gunicorn)\nParser\nLinks\n\n\n\n\n\nREADME.md\n\n\n\n\nPrometheus Python Client\nThe official Python client for Prometheus.\nThree Step Demo\nOne: Install the client:\npip install prometheus-client\n\nTwo: Paste the following into a Python interpreter:\nfrom prometheus_client import start_http_server, Summary\nimport random\nimport time\n\n# Create a metric to track time spent and requests made.\nREQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')\n\n# Decorate function with metric.\n@REQUEST_TIME.time()\ndef process_request(t):\n    \"\"\"A dummy function that takes some time.\"\"\"\n    time.sleep(t)\n\nif __name__ == '__main__':\n    # Start up the server to expose the metrics.\n    start_http_server(8000)\n    # Generate some requests.\n    while True:\n        process_request(random.random())\nThree: Visit http://localhost:8000/ to view the metrics.\nFrom one easy to use decorator you get:\n\nrequest_processing_seconds_count: Number of times this function was called.\nrequest_processing_seconds_sum: Total amount of time spent in this function.\n\nPrometheus's rate function allows calculation of both requests per second,\nand latency over time from this data.\nIn addition if you're on Linux the process metrics expose CPU, memory and\nother information about the process for free!\nInstallation\npip install prometheus-client\n\nThis package can be found on\nPyPI.\nInstrumenting\nFour types of metric are offered: Counter, Gauge, Summary and Histogram.\nSee the documentation on metric types\nand instrumentation best practices\non how to use them.\nCounter\nCounters go up, and reset when the process restarts.\nfrom prometheus_client import Counter\nc = Counter('my_failures', 'Description of counter')\nc.inc()     # Increment by 1\nc.inc(1.6)  # Increment by given value\nIf there is a suffix of _total on the metric name, it will be removed. When\nexposing the time series for counter, a _total suffix will be added. 
This is\nfor compatibility between OpenMetrics and the Prometheus text format, as OpenMetrics\nrequires the _total suffix.\nThere are utilities to count exceptions raised:\n@c.count_exceptions()\ndef f():\n  pass\n\nwith c.count_exceptions():\n  pass\n\n# Count only one type of exception\nwith c.count_exceptions(ValueError):\n  pass\nGauge\nGauges can go up and down.\nfrom prometheus_client import Gauge\ng = Gauge('my_inprogress_requests', 'Description of gauge')\ng.inc()      # Increment by 1\ng.dec(10)    # Decrement by given value\ng.set(4.2)   # Set to a given value\nThere are utilities for common use cases:\ng.set_to_current_time()   # Set to current unixtime\n\n# Increment when entered, decrement when exited.\n@g.track_inprogress()\ndef f():\n  pass\n\nwith g.track_inprogress():\n  pass\nA Gauge can also take its value from a callback:\nd = Gauge('data_objects', 'Number of objects')\nmy_dict = {}\nd.set_function(lambda: len(my_dict))\nSummary\nSummaries track the size and number of events.\nfrom prometheus_client import Summary\ns = Summary('request_latency_seconds', 'Description of summary')\ns.observe(4.7)    # Observe 4.7 (seconds in this case)\nThere are utilities for timing code:\n@s.time()\ndef f():\n  pass\n\nwith s.time():\n  pass\nThe Python client doesn't store or expose quantile information at this time.\nHistogram\nHistograms track the size and number of events in buckets.\nThis allows for aggregatable calculation of quantiles.\nfrom prometheus_client import Histogram\nh = Histogram('request_latency_seconds', 'Description of histogram')\nh.observe(4.7)    # Observe 4.7 (seconds in this case)\nThe default buckets are intended to cover a typical web/rpc request from milliseconds to seconds.\nThey can be overridden by passing buckets keyword argument to Histogram.\nThere are utilities for timing code:\n@h.time()\ndef f():\n  pass\n\nwith h.time():\n  pass\nInfo\nInfo tracks key-value information, usually about a whole target.\nfrom prometheus_client import Info\ni = Info('my_build_version', 'Description of info')\ni.info({'version': '1.2.3', 'buildhost': 'foo@bar'})\nEnum\nEnum tracks which of a set of states something is currently in.\nfrom prometheus_client import Enum\ne = Enum('my_task_state', 'Description of enum',\n        states=['starting', 'running', 'stopped'])\ne.state('running')\nLabels\nAll metrics can have labels, allowing grouping of related time series.\nSee the best practices on naming\nand labels.\nTaking a counter as an example:\nfrom prometheus_client import Counter\nc = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\nc.labels('get', '/').inc()\nc.labels('post', '/submit').inc()\nLabels can also be passed as keyword-arguments:\nfrom prometheus_client import Counter\nc = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\nc.labels(method='get', endpoint='/').inc()\nc.labels(method='post', endpoint='/submit').inc()\nMetrics with labels are not initialized when declared, because the client can't\nknow what values the label can have. It is recommended to initialize the label\nvalues by calling the .labels() method alone:\nfrom prometheus_client import Counter\nc = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\nc.labels('get', '/')\nc.labels('post', '/submit')\nExemplars\nExemplars can be added to counter and histogram metrics. 
Exemplars can be\nspecified by passing a dict of label value pairs to be exposed as the exemplar.\nFor example with a counter:\nfrom prometheus_client import Counter\nc = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\nc.labels('get', '/').inc(exemplar={'trace_id': 'abc123'})\nc.labels('post', '/submit').inc(1.0, {'trace_id': 'def456'})\nAnd with a histogram:\nfrom prometheus_client import Histogram\nh = Histogram('request_latency_seconds', 'Description of histogram')\nh.observe(4.7, {'trace_id': 'abc123'})\nExemplars are only rendered in the OpenMetrics exposition format. If using the\nHTTP server or apps in this library, content negotiation can be used to specify\nOpenMetrics (which is done by default in Prometheus). Otherwise it will be\nnecessary to use generate_latest from\nprometheus_client.openmetrics.exposition to view exemplars.\nTo view exemplars in Prometheus it is also necessary to enable the the\nexemplar-storage feature flag:\n--enable-feature=exemplar-storage\n\nAdditional information is available in the Prometheus\ndocumentation.\nDisabling _created metrics\nBy default counters, histograms, and summaries export an additional series\nsuffixed with _created and a value of the unix timestamp for when the metric\nwas created. If this information is not helpful, it can be disabled by setting\nthe environment variable PROMETHEUS_DISABLE_CREATED_SERIES=True.\nProcess Collector\nThe Python client automatically exports metrics about process CPU usage, RAM,\nfile descriptors and start time. These all have the prefix process, and\nare only currently available on Linux.\nThe namespace and pid constructor arguments allows for exporting metrics about\nother processes, for example:\nProcessCollector(namespace='mydaemon', pid=lambda: open('/var/run/daemon.pid').read())\n\nPlatform Collector\nThe client also automatically exports some metadata about Python. If using Jython,\nmetadata about the JVM in use is also included. This information is available as\nlabels on the python_info metric. The value of the metric is 1, since it is the\nlabels that carry information.\nDisabling Default Collector metrics\nBy default the collected process, gc, and platform collector metrics are exported.\nIf this information is not helpful, it can be disabled using the following:\nimport prometheus_client\n\nprometheus_client.REGISTRY.unregister(prometheus_client.GC_COLLECTOR)\nprometheus_client.REGISTRY.unregister(prometheus_client.PLATFORM_COLLECTOR)\nprometheus_client.REGISTRY.unregister(prometheus_client.PROCESS_COLLECTOR)\nExporting\nThere are several options for exporting metrics.\nHTTP\nMetrics are usually exposed over HTTP, to be read by the Prometheus server.\nThe easiest way to do this is via start_http_server, which will start a HTTP\nserver in a daemon thread on the given port:\nfrom prometheus_client import start_http_server\n\nstart_http_server(8000)\nVisit http://localhost:8000/ to view the metrics.\nTo add Prometheus exposition to an existing HTTP server, see the MetricsHandler class\nwhich provides a BaseHTTPRequestHandler. 
It also serves as a simple example of how\nto write a custom endpoint.\nTwisted\nTo use prometheus with twisted, there is MetricsResource which exposes metrics as a twisted resource.\nfrom prometheus_client.twisted import MetricsResource\nfrom twisted.web.server import Site\nfrom twisted.web.resource import Resource\nfrom twisted.internet import reactor\n\nroot = Resource()\nroot.putChild(b'metrics', MetricsResource())\n\nfactory = Site(root)\nreactor.listenTCP(8000, factory)\nreactor.run()\nWSGI\nTo use Prometheus with WSGI, there is\nmake_wsgi_app which creates a WSGI application.\nfrom prometheus_client import make_wsgi_app\nfrom wsgiref.simple_server import make_server\n\napp = make_wsgi_app()\nhttpd = make_server('', 8000, app)\nhttpd.serve_forever()\nSuch an application can be useful when integrating Prometheus metrics with WSGI\napps.\nThe method start_wsgi_server can be used to serve the metrics through the\nWSGI reference implementation in a new thread.\nfrom prometheus_client import start_wsgi_server\n\nstart_wsgi_server(8000)\nBy default, the WSGI application will respect Accept-Encoding:gzip headers used by Prometheus\nand compress the response if such a header is present. This behaviour can be disabled by passing\ndisable_compression=True when creating the app, like this:\napp = make_wsgi_app(disable_compression=True)\nASGI\nTo use Prometheus with ASGI, there is\nmake_asgi_app which creates an ASGI application.\nfrom prometheus_client import make_asgi_app\n\napp = make_asgi_app()\nSuch an application can be useful when integrating Prometheus metrics with ASGI\napps.\nBy default, the WSGI application will respect Accept-Encoding:gzip headers used by Prometheus\nand compress the response if such a header is present. This behaviour can be disabled by passing\ndisable_compression=True when creating the app, like this:\napp = make_asgi_app(disable_compression=True)\nFlask\nTo use Prometheus with Flask we need to serve metrics through a Prometheus WSGI application. This can be achieved using Flask's application dispatching. Below is a working example.\nSave the snippet below in a myapp.py file\nfrom flask import Flask\nfrom werkzeug.middleware.dispatcher import DispatcherMiddleware\nfrom prometheus_client import make_wsgi_app\n\n# Create my app\napp = Flask(__name__)\n\n# Add prometheus wsgi middleware to route /metrics requests\napp.wsgi_app = DispatcherMiddleware(app.wsgi_app, {\n    '/metrics': make_wsgi_app()\n})\nRun the example web application like this\n# Install uwsgi if you do not have it\npip install uwsgi\nuwsgi --http 127.0.0.1:8000 --wsgi-file myapp.py --callable app\nVisit http://localhost:8000/metrics to see the metrics\nFastAPI + Gunicorn\nTo use Prometheus with FastAPI and Gunicorn we need to serve metrics through a Prometheus ASGI application.\nSave the snippet below in a myapp.py file\nfrom fastapi import FastAPI\nfrom prometheus_client import make_asgi_app\n\n# Create app\napp = FastAPI(debug=False)\n\n# Add prometheus asgi middleware to route /metrics requests\nmetrics_app = make_asgi_app()\napp.mount(\"/metrics\", metrics_app)\nFor Multiprocessing support, use this modified code snippet. 
Full multiprocessing instructions are provided here.\nfrom fastapi import FastAPI\nfrom prometheus_client import make_asgi_app\n\napp = FastAPI(debug=False)\n\n# Using multiprocess collector for registry\ndef make_metrics_app():\n    registry = CollectorRegistry()\n    multiprocess.MultiProcessCollector(registry)\n    return make_asgi_app(registry=registry)\n\n\nmetrics_app = make_metrics_app()\napp.mount(\"/metrics\", metrics_app)\nRun the example web application like this\n# Install gunicorn if you do not have it\npip install gunicorn\n# If using multiple workers, add `--workers n` parameter to the line below\ngunicorn -b 127.0.0.1:8000 myapp:app -k uvicorn.workers.UvicornWorker\nVisit http://localhost:8000/metrics to see the metrics\nNode exporter textfile collector\nThe textfile collector\nallows machine-level statistics to be exported out via the Node exporter.\nThis is useful for monitoring cronjobs, or for writing cronjobs to expose metrics\nabout a machine system that the Node exporter does not support or would not make sense\nto perform at every scrape (for example, anything involving subprocesses).\nfrom prometheus_client import CollectorRegistry, Gauge, write_to_textfile\n\nregistry = CollectorRegistry()\ng = Gauge('raid_status', '1 if raid array is okay', registry=registry)\ng.set(1)\nwrite_to_textfile('/configured/textfile/path/raid.prom', registry)\nA separate registry is used, as the default registry may contain other metrics\nsuch as those from the Process Collector.\nExporting to a Pushgateway\nThe Pushgateway\nallows ephemeral and batch jobs to expose their metrics to Prometheus.\nfrom prometheus_client import CollectorRegistry, Gauge, push_to_gateway\n\nregistry = CollectorRegistry()\ng = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)\ng.set_to_current_time()\npush_to_gateway('localhost:9091', job='batchA', registry=registry)\nA separate registry is used, as the default registry may contain other metrics\nsuch as those from the Process Collector.\nPushgateway functions take a grouping key. push_to_gateway replaces metrics\nwith the same grouping key, pushadd_to_gateway only replaces metrics with the\nsame name and grouping key and delete_from_gateway deletes metrics with the\ngiven job and grouping key. 
See the\nPushgateway documentation\nfor more information.\ninstance_ip_grouping_key returns a grouping key with the instance label set\nto the host's IP address.\nHandlers for authentication\nIf the push gateway you are connecting to is protected with HTTP Basic Auth,\nyou can use a special handler to set the Authorization header.\nfrom prometheus_client import CollectorRegistry, Gauge, push_to_gateway\nfrom prometheus_client.exposition import basic_auth_handler\n\ndef my_auth_handler(url, method, timeout, headers, data):\n    username = 'foobar'\n    password = 'secret123'\n    return basic_auth_handler(url, method, timeout, headers, data, username, password)\nregistry = CollectorRegistry()\ng = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)\ng.set_to_current_time()\npush_to_gateway('localhost:9091', job='batchA', registry=registry, handler=my_auth_handler)\nTLS Auth is also supported when using the push gateway with a special handler.\nfrom prometheus_client import CollectorRegistry, Gauge, push_to_gateway\nfrom prometheus_client.exposition import tls_auth_handler\n\n\ndef my_auth_handler(url, method, timeout, headers, data):\n    certfile = 'client-crt.pem'\n    keyfile = 'client-key.pem'\n    return tls_auth_handler(url, method, timeout, headers, data, certfile, keyfile)\n\nregistry = CollectorRegistry()\ng = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)\ng.set_to_current_time()\npush_to_gateway('localhost:9091', job='batchA', registry=registry, handler=my_auth_handler)\nBridges\nIt is also possible to expose metrics to systems other than Prometheus.\nThis allows you to take advantage of Prometheus instrumentation even\nif you are not quite ready to fully transition to Prometheus yet.\nGraphite\nMetrics are pushed over TCP in the Graphite plaintext format.\nfrom prometheus_client.bridge.graphite import GraphiteBridge\n\ngb = GraphiteBridge(('graphite.your.org', 2003))\n# Push once.\ngb.push()\n# Push every 10 seconds in a daemon thread.\ngb.start(10.0)\nGraphite tags are also supported.\nfrom prometheus_client.bridge.graphite import GraphiteBridge\n\ngb = GraphiteBridge(('graphite.your.org', 2003), tags=True)\nc = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\nc.labels('get', '/').inc()\ngb.push()\nCustom Collectors\nSometimes it is not possible to directly instrument code, as it is not\nin your control. This requires you to proxy metrics from other systems.\nTo do so you need to create a custom collector, for example:\nfrom prometheus_client.core import GaugeMetricFamily, CounterMetricFamily, REGISTRY\n\nclass CustomCollector(object):\n    def collect(self):\n        yield GaugeMetricFamily('my_gauge', 'Help text', value=7)\n        c = CounterMetricFamily('my_counter_total', 'Help text', labels=['foo'])\n        c.add_metric(['bar'], 1.7)\n        c.add_metric(['baz'], 3.8)\n        yield c\n\nREGISTRY.register(CustomCollector())\nSummaryMetricFamily, HistogramMetricFamily and InfoMetricFamily work similarly.\nA collector may implement a describe method which returns metrics in the same\nformat as collect (though you don't have to include the samples). This is\nused to predetermine the names of time series a CollectorRegistry exposes and\nthus to detect collisions and duplicate registrations.\nUsually custom collectors do not have to implement describe. 
If describe is\nnot implemented and the CollectorRegistry was created with auto_describe=True\n(which is the case for the default registry) then collect will be called at\nregistration time instead of describe. If this could cause problems, either\nimplement a proper describe, or if that's not practical have describe\nreturn an empty list.\nMultiprocess Mode (E.g. Gunicorn)\nPrometheus client libraries presume a threaded model, where metrics are shared\nacross workers. This doesn't work so well for languages such as Python where\nit's common to have processes rather than threads to handle large workloads.\nTo handle this the client library can be put in multiprocess mode.\nThis comes with a number of limitations:\n\nRegistries can not be used as normal, all instantiated metrics are exported\n\nRegistering metrics to a registry later used by a MultiProcessCollector\nmay cause duplicate metrics to be exported\n\n\nCustom collectors do not work (e.g. cpu and memory metrics)\nInfo and Enum metrics do not work\nThe pushgateway cannot be used\nGauges cannot use the pid label\nExemplars are not supported\n\nThere's several steps to getting this working:\n1. Deployment:\nThe PROMETHEUS_MULTIPROC_DIR environment variable must be set to a directory\nthat the client library can use for metrics. This directory must be wiped\nbetween process/Gunicorn runs (before startup is recommended).\nThis environment variable should be set from a start-up shell script,\nand not directly from Python (otherwise it may not propagate to child processes).\n2. Metrics collector:\nThe application must initialize a new CollectorRegistry, and store the\nmulti-process collector inside. It is a best practice to create this registry\ninside the context of a request to avoid metrics registering themselves to a\ncollector used by a MultiProcessCollector. If a registry with metrics\nregistered is used by a MultiProcessCollector duplicate metrics may be\nexported, one for multiprocess, and one for the process serving the request.\nfrom prometheus_client import multiprocess\nfrom prometheus_client import generate_latest, CollectorRegistry, CONTENT_TYPE_LATEST, Counter\n\nMY_COUNTER = Counter('my_counter', 'Description of my counter')\n\n# Expose metrics.\ndef app(environ, start_response):\n    registry = CollectorRegistry()\n    multiprocess.MultiProcessCollector(registry)\n    data = generate_latest(registry)\n    status = '200 OK'\n    response_headers = [\n        ('Content-type', CONTENT_TYPE_LATEST),\n        ('Content-Length', str(len(data)))\n    ]\n    start_response(status, response_headers)\n    return iter([data])\n3. Gunicorn configuration:\nThe gunicorn configuration file needs to include the following function:\nfrom prometheus_client import multiprocess\n\ndef child_exit(server, worker):\n    multiprocess.mark_process_dead(worker.pid)\n4. Metrics tuning (Gauge):\nWhen Gauges are used in multiprocess applications,\nyou must decide how to handle the metrics reported by each process.\nGauges have several modes they can run in, which can be selected with the multiprocess_mode parameter.\n\n'all': Default. 
Return a timeseries per process (alive or dead), labelled by the process's pid (the label is added internally).\n'min': Return a single timeseries that is the minimum of the values of all processes (alive or dead).\n'max': Return a single timeseries that is the maximum of the values of all processes (alive or dead).\n'sum': Return a single timeseries that is the sum of the values of all processes (alive or dead).\n\nPrepend 'live' to the beginning of the mode to return the same result but only considering living processes\n(e.g., 'liveall, 'livesum', 'livemax', 'livemin').\nfrom prometheus_client import Gauge\n\n# Example gauge\nIN_PROGRESS = Gauge(\"inprogress_requests\", \"help\", multiprocess_mode='livesum')\nParser\nThe Python client supports parsing the Prometheus text format.\nThis is intended for advanced use cases where you have servers\nexposing Prometheus metrics and need to get them into some other\nsystem.\nfrom prometheus_client.parser import text_string_to_metric_families\nfor family in text_string_to_metric_families(u\"my_gauge 1.0\\n\"):\n  for sample in family.samples:\n    print(\"Name: {0} Labels: {1} Value: {2}\".format(*sample))\nLinks\n\nReleases: The releases page shows the history of the project and acts as a changelog.\nPyPI\n\n\n\n"}, {"name": "proglog", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nUsage\nInstallation\nLicense = MIT\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\nProglog is a progress logging system for Python. It allows to build complex\nlibraries while giving your  users control over logs, callbacks and progress bars.\nWhat problems does it solve ?\nLibraries like tqdm or progress are great for quickly adding progress bars to your scripts, but become difficult to manage when building larger projects.\nFor instance, you will need to write different code depending on whether you are displaying the progress in a console, a Jupyter notebook, or a webpage.\nSometimes a single program may have to handle many logs and progress bars coming from different subprograms and libraries, at which case you may want to let the final user decide which progress bars they want to display or to mute, even when these progress bars are handled deep down in your program.\nFor instance if your program 1 calls a program 2 and program 3 (possibly from other libraries), you may want to be able to silence the progress bars of routine 2, or to only show the progress bars of routine 1. As long as all routines use Proglog, this will be easy to do.\n\n\nYou may also want to log more than just progress bars, have specific callback fonctions, print the logs in human-readable format... 
Proglog provides all these features.\n\nUsage\nAssume that you are writing a library called my_library in which you define a routine as follows:\nimport time  # for simulating computing time\nfrom proglog import default_bar_logger\n\ndef my_routine(iterations=10, logger='bar'):\n    \"\"\"Run several loops to showcase Proglog.\"\"\"\n    logger = default_bar_logger(logger)  # shorthand to generate a bar logger\n    for i in logger.iter_bar(iteration=range(iterations)):\n        for j in logger.iter_bar(animal=['dog', 'cat', 'rat', 'duck']):\n            time.sleep(0.1)  # simulate some computing time\nNow when the library users run a program in the console, they will get a console progress bar:\nfrom my_library import my_routine\nmy_routine()\n\n\nIf the users run the routine inside a Jupyter/IPython notebook, they only need to write proglog.notebook() at the beginning of the notebook to obtain HTML progress bars:\nimport proglog\nproglog.notebook()\n\nfrom my_library import my_routine\nmy_routine()\n\n\nIf the user wishes to turn off all progress bars:\nfrom my_library import my_routine\nmy_routine(logger=None)\nIf the user is running the routine on a web server and would want to attach the\ndata to an asynchronous Python-RQ job, all they need is yet a different logger:\nfrom proglog import RqWorkerBarLogger\nfrom my_library import my_routine\n\nlogger = RqWorkerBarLogger(job=some_python_rq_job)\nmy_routine(logger=logger)\nThis allows to then display progress bars on the website such as these (see the EGF CUBA project for an example of website using Proglog):\n\n\nThe user may also want a custom progress logger which selectively ignores the animals progress bar, and only updates its bars every second (to save computing time):\nfrom proglog import TqdmProgressBarLogger\nfrom my_library import my_routine\n\nlogger = TqdmProgressBarLogger(ignored_bars=('animal',),\n                               min_time_interval=1.0)\nmy_routine(logger=logger)\nProglog loggers can be used for much more than just progress bars. 
They can in fact store any kind of data with a simple API:\nlogger(message='Now running the main program, be patient...')\nlogger(current_animal='cat')\nlogger(last_number_tried=1235)\nFor more complex customization, such as adding callback functions which will be executed every time the logger's state is updated, simply create a new logger class:\nfrom proglog import ProgressBarLogger\nfrom my_library import my_routine\n\nclass MyBarLogger(ProgressBarLogger):\n\n    def callback(self, **changes):\n        # Every time the logger is updated, this function is called with\n        # the `changes` dictionnary of the form `parameter: new value`.\n\n        for (parameter, new_value) in changes.items():\n            print ('Parameter %s is now %s' % (parameter, value))\n\nlogger = MyBarLogger()\nmy_routine(logger=logger)\nWhen writing libraries which all log progress and may depend on each other, simply pass the Proglog logger from one program to its dependencies, to obtain one logger keeping track of all progress across libraries at once:\n\n\nNote that this implies that not two libraries use the same variables or loop names, which can be avoided by attributing prefixes to these names:\nfor i in logger.iter_bar(iteration=range(iterations), bar_prefix='libraryname_'):\n    ...\n\nInstallation\nYou can install Proglog through PIP:\npip install proglog\nAlternatively, you can unzip the sources in a folder and type:\npython setup.py install\nTo use the tqdm notebook-style progress bars you need to install iwidgets:\npip install ipywidgets\nThis should automatically enable it; for older versions try:\njupyter nbextension enable --py --sys-prefix widgetsnbextension\n\nLicense = MIT\nProglog is an open-source software originally written at the Edinburgh Genome Foundry by Zulko\nand released on Github under\nthe MIT license (Copyright 2017 Edinburgh Genome Foundry).\nProglog was not written by loggology experts, it just works with our projects and we use it a lot.\nEveryone is welcome to contribute if you find bugs or limitations !\n\n\n", "description": "Progress logging system for Python to build complex libraries with custom logs and progress bars."}, {"name": "priority", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPriority: A HTTP/2 Priority Implementation\nUsing Priority\nIterating The Tree\nUpdating The Tree\nRemoving Streams\nLicense\nAuthors\n\n\n\n\n\nREADME.rst\n\n\n\n\nPriority: A HTTP/2 Priority Implementation\n\n\n\n\n\n\n\n\nPriority is a pure-Python implementation of the priority logic for HTTP/2, set\nout in RFC 7540 Section 5.3 (Stream Priority). This logic allows for clients\nto express a preference for how the server allocates its (limited) resources to\nthe many outstanding HTTP requests that may be running over a single HTTP/2\nconnection.\nSpecifically, this Python implementation uses a variant of the implementation\nused in the excellent H2O project. This original implementation is also the\ninspiration for nghttp2's priority implementation, and generally produces a\nvery clean and even priority stream. The only notable changes from H2O's\nimplementation are small modifications to allow the priority implementation to\nwork cleanly as a separate implementation, rather than being embedded in a\nHTTP/2 stack directly.\nWhile priority information in HTTP/2 is only a suggestion, rather than an\nenforceable constraint, where possible servers should respect the priority\nrequests of their clients.\n\nUsing Priority\nPriority has a simple API. 
Streams are inserted into the tree: when they are\ninserted, they may optionally have a weight, depend on another stream, or\nbecome an exclusive dependent of another stream.\n>>> p = priority.PriorityTree()\n>>> p.insert_stream(stream_id=1)\n>>> p.insert_stream(stream_id=3)\n>>> p.insert_stream(stream_id=5, depends_on=1)\n>>> p.insert_stream(stream_id=7, weight=32)\n>>> p.insert_stream(stream_id=9, depends_on=7, weight=8)\n>>> p.insert_stream(stream_id=11, depends_on=7, exclusive=True)\nOnce streams are inserted, the stream priorities can be requested. This allows\nthe server to make decisions about how to allocate resources.\n\nIterating The Tree\nThe tree in this algorithm acts as a gate. Its goal is to allow one stream\n\"through\" at a time, in such a manner that all the active streams are served as\nevenly as possible in proportion to their weights.\nThis is handled in Priority by iterating over the tree. The tree itself is an\niterator, and each time it is advanced it will yield a stream ID. This is the\nID of the stream that should next send data.\nThis looks like this:\n>>> for stream_id in p:\n...     send_data(stream_id)\nIf each stream only sends when it is 'ungated' by this mechanism, the server\nwill automatically be emitting stream data in conformance to RFC 7540.\n\nUpdating The Tree\nIf for any reason a stream is unable to proceed (for example, it is blocked on\nHTTP/2 flow control, or it is waiting for more data from another service), that\nstream is blocked. The PriorityTree should be informed that the stream is\nblocked so that other dependent streams get a chance to proceed. This can be\ndone by calling the block method of the tree with the stream ID that is\ncurrently unable to proceed. This will automatically update the tree, and it\nwill adjust on the fly to correctly allow any streams that were dependent on\nthe blocked one to progress.\nFor example:\n>>> for stream_id in p:\n...     send_data(stream_id)\n...     if blocked(stream_id):\n...         p.block(stream_id)\nWhen a stream goes from being blocked to being unblocked, call the unblock\nmethod to place it back into the sequence. Both the block and unblock\nmethods are idempotent and safe to call repeatedly.\nAdditionally, the priority of a stream may change. When it does, the\nreprioritize method can be used to update the tree in the wake of that\nchange. reprioritize has the same signature as insert_stream, but\napplies only to streams already in the tree.\n\nRemoving Streams\nA stream can be entirely removed from the tree by calling remove_stream.\nNote that this is not idempotent. Further, calling remove_stream and then\nre-adding it may cause a substantial change in the shape of the priority\ntree, and will cause the iteration order to change.\n\nLicense\nPriority is made available under the MIT License. For more details, see the\nLICENSE file in the repository.\n\nAuthors\nPriority is maintained by Cory Benfield, with contributions from others. For\nmore details about the contributors, please see CONTRIBUTORS.rst in the\nrepository.\n\n\n", "description": "HTTP/2 priority logic implementation for request prioritization."}, {"name": "preshed", "readme": "\n\n\n\nREADME.md\n\n\n\n\n\npreshed: Cython Hash Table for Pre-Hashed Keys\nSimple but high performance Cython hash table mapping pre-randomized keys to\nvoid* values. 
Inspired by\nJeff Preshing.\n\n\n\n\n\n\n", "description": "Hash table implementation optimized for pre-hashed keys."}, {"name": "pooch", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nAbout\nExample\nProjects using Pooch\nGetting involved\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\nDocumentation (latest) \u2022\nDocumentation (main branch) \u2022\nContributing \u2022\nContact\n\n\nPart of the Fatiando a Terra project\n\n\n\n\n\n\n\n\nAbout\nDoes your Python package include sample datasets?\nAre you shipping them with the code?\nAre they getting too big?\nPooch is here to help! It will manage a data registry by downloading your\ndata files from a server only when needed and storing them locally in a data\ncache (a folder on your computer).\nHere are Pooch's main features:\n\nPure Python and minimal dependencies.\nDownload a file only if necessary (it's not in the data cache or needs to be\nupdated).\nVerify download integrity through SHA256 hashes (also used to check if a file\nneeds to be updated).\nDesigned to be extended: plug in custom download (FTP, scp, etc) and\npost-processing (unzip, decompress, rename) functions.\nIncludes utilities to unzip/decompress the data upon download to save loading\ntime.\nCan handle basic HTTP authentication (for servers that require a login) and\nprinting download progress bars.\nEasily set up an environment variable to overwrite the data cache location.\n\nAre you a scientist or researcher? Pooch can help you too!\n\nAutomatically download your data files so you don't have to keep them in your\nGitHub repository.\nMake sure everyone running the code has the same version of the data files\n(enforced through the SHA256 hashes).\n\nExample\nFor a scientist downloading a data file for analysis:\nimport pooch\nimport pandas as pd\n\n# Download a file and save it locally, returning the path to it.\n# Running this again will not cause a download. Pooch will check the hash\n# (checksum) of the downloaded file against the given value to make sure\n# it's the right file (not corrupted or outdated).\nfname_bathymetry = pooch.retrieve(\n    url=\"https://github.com/fatiando-data/caribbean-bathymetry/releases/download/v1/caribbean-bathymetry.csv.xz\",\n    known_hash=\"md5:a7332aa6e69c77d49d7fb54b764caa82\",\n)\n\n# Pooch can also download based on a DOI from certain providers.\nfname_gravity = pooch.retrieve(\n    url=\"doi:10.5281/zenodo.5882430/southern-africa-gravity.csv.xz\",\n    known_hash=\"md5:1dee324a14e647855366d6eb01a1ef35\",\n)\n\n# Load the data with Pandas\ndata_bathymetry = pd.read_csv(fname_bathymetry)\ndata_gravity = pd.read_csv(fname_gravity)\nFor package developers including sample data in their projects:\n\"\"\"\nModule mypackage/datasets.py\n\"\"\"\nimport pkg_resources\nimport pandas\nimport pooch\n\n# Get the version string from your project. You have one of these, right?\nfrom . import version\n\n# Create a new friend to manage your sample data storage\nGOODBOY = pooch.create(\n    # Folder where the data will be stored. For a sensible default, use the\n    # default cache folder for your OS.\n    path=pooch.os_cache(\"mypackage\"),\n    # Base URL of the remote data store. Will call .format on this string\n    # to insert the version (see below).\n    base_url=\"https://github.com/myproject/mypackage/raw/{version}/data/\",\n    # Pooches are versioned so that you can use multiple versions of a\n    # package simultaneously. Use PEP440 compliant version number. 
The\n    # version will be appended to the path.\n    version=version,\n    # If a version as a \"+XX.XXXXX\" suffix, we'll assume that this is a dev\n    # version and replace the version with this string.\n    version_dev=\"main\",\n    # An environment variable that overwrites the path.\n    env=\"MYPACKAGE_DATA_DIR\",\n    # The cache file registry. A dictionary with all files managed by this\n    # pooch. Keys are the file names (relative to *base_url*) and values\n    # are their respective SHA256 hashes. Files will be downloaded\n    # automatically when needed (see fetch_gravity_data).\n    registry={\"gravity-data.csv\": \"89y10phsdwhs09whljwc09whcowsdhcwodcydw\"}\n)\n# You can also load the registry from a file. Each line contains a file\n# name and it's sha256 hash separated by a space. This makes it easier to\n# manage large numbers of data files. The registry file should be packaged\n# and distributed with your software.\nGOODBOY.load_registry(\n    pkg_resources.resource_stream(\"mypackage\", \"registry.txt\")\n)\n\n# Define functions that your users can call to get back the data in memory\ndef fetch_gravity_data():\n    \"\"\"\n    Load some sample gravity data to use in your docs.\n    \"\"\"\n    # Fetch the path to a file in the local storage. If it's not there,\n    # we'll download it.\n    fname = GOODBOY.fetch(\"gravity-data.csv\")\n    # Load it with numpy/pandas/etc\n    data = pandas.read_csv(fname)\n    return data\nProjects using Pooch\n\nSciPy\nscikit-image\nMetPy\nicepack\nhistolab\nseaborn-image\nEnsaio\nOpen AR-Sandbox\nclimlab\nnapari\nmne-python\nGemGIS\n\nIf you're using Pooch, send us a pull request adding your project to the list.\nGetting involved\n\ud83d\udde8\ufe0f Contact us:\nFind out more about how to reach us at\nfatiando.org/contact.\n\ud83d\udc69\ud83c\udffe\u200d\ud83d\udcbb Contributing to project development:\nPlease read our\nContributing Guide\nto see how you can help and give feedback.\n\ud83e\uddd1\ud83c\udffe\u200d\ud83e\udd1d\u200d\ud83e\uddd1\ud83c\udffc Code of conduct:\nThis project is released with a\nCode of Conduct.\nBy participating in this project you agree to abide by its terms.\n\nImposter syndrome disclaimer:\nWe want your help. No, really. There may be a little voice inside your\nhead that is telling you that you're not ready, that you aren't skilled\nenough to contribute. We assure you that the little voice in your head is\nwrong. Most importantly, there are many valuable ways to contribute besides\nwriting code.\nThis disclaimer was adapted from the\nMetPy project.\n\nLicense\nThis is free software: you can redistribute it and/or modify it under the terms\nof the BSD 3-clause License. 
A copy of this license is provided in\nLICENSE.txt.\n\n\n", "description": "Manage and version control data files for Python projects."}, {"name": "pluggy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npluggy - A minimalist production ready plugin system\nA definitive example\n\n\n\n\n\nREADME.rst\n\n\n\n\npluggy - A minimalist production ready plugin system\n\n \n \n \n \n \n \n\nThis is the core framework used by the pytest, tox, and devpi projects.\nPlease read the docs to learn more!\n\nA definitive example\nimport pluggy\n\nhookspec = pluggy.HookspecMarker(\"myproject\")\nhookimpl = pluggy.HookimplMarker(\"myproject\")\n\n\nclass MySpec:\n    \"\"\"A hook specification namespace.\"\"\"\n\n    @hookspec\n    def myhook(self, arg1, arg2):\n        \"\"\"My special little hook that you can customize.\"\"\"\n\n\nclass Plugin_1:\n    \"\"\"A hook implementation namespace.\"\"\"\n\n    @hookimpl\n    def myhook(self, arg1, arg2):\n        print(\"inside Plugin_1.myhook()\")\n        return arg1 + arg2\n\n\nclass Plugin_2:\n    \"\"\"A 2nd hook implementation namespace.\"\"\"\n\n    @hookimpl\n    def myhook(self, arg1, arg2):\n        print(\"inside Plugin_2.myhook()\")\n        return arg1 - arg2\n\n\n# create a manager and add the spec\npm = pluggy.PluginManager(\"myproject\")\npm.add_hookspecs(MySpec)\n\n# register plugins\npm.register(Plugin_1())\npm.register(Plugin_2())\n\n# call our ``myhook`` hook\nresults = pm.hook.myhook(arg1=1, arg2=2)\nprint(results)\nRunning this directly gets us:\n$ python docs/examples/toy-example.py\ninside Plugin_2.myhook()\ninside Plugin_1.myhook()\n[-1, 3]\n\n\n\n", "description": "Plugin and hook calling mechanisms for Python."}, {"name": "plotnine", "readme": "\nplotnine\n\n\n\n\n\n\nplotnine is an implementation of a grammar of graphics in Python\nbased on ggplot2.\nThe grammar allows you to compose plots by explicitly mapping variables in a\ndataframe to the visual objects that make up the plot.\n\nPlotting with a grammar of graphics is powerful. Custom (and otherwise\ncomplex) plots are easy to think about and build incrementaly, while the\nsimple plots remain simple to create.\nTo learn more about how to use plotnine, check out the\ndocumentation. 
Since plotnine\nhas an API similar to ggplot2, where it lacks in coverage the\nggplot2 documentation\nmay be helpful.\nExample\nfrom plotnine import *\nfrom plotnine.data import mtcars\n\nBuilding a complex plot piece by piece.\n\n\nScatter plot\n(ggplot(mtcars, aes(\"wt\", \"mpg\"))\n + geom_point())\n\n\n\n\nScatter plot colored according some variable\n(ggplot(mtcars, aes(\"wt\", \"mpg\", color=\"factor(gear)\"))\n + geom_point())\n\n\n\n\nScatter plot colored according some variable and\nsmoothed with a linear model with confidence intervals.\n(ggplot(mtcars, aes(\"wt\", \"mpg\", color=\"factor(gear)\"))\n + geom_point()\n + stat_smooth(method=\"lm\"))\n\n\n\n\nScatter plot colored according some variable,\nsmoothed with a linear model with confidence intervals and\nplotted on separate panels.\n(ggplot(mtcars, aes(\"wt\", \"mpg\", color=\"factor(gear)\"))\n + geom_point()\n + stat_smooth(method=\"lm\")\n + facet_wrap(\"~gear\"))\n\n\n\n\nAdjust the themes\nI) Make it playful\n(ggplot(mtcars, aes(\"wt\", \"mpg\", color=\"factor(gear)\"))\n + geom_point()\n + stat_smooth(method=\"lm\")\n + facet_wrap(\"~gear\")\n + theme_xkcd())\n\n\nII) Or professional\n(ggplot(mtcars, aes(\"wt\", \"mpg\", color=\"factor(gear)\"))\n + geom_point()\n + stat_smooth(method=\"lm\")\n + facet_wrap(\"~gear\")\n + theme_tufte())\n\n\n\n\nInstallation\nOfficial release\n# Using pip\n$ pip install plotnine             # 1. should be sufficient for most\n$ pip install 'plotnine[extra]'    # 2. includes extra/optional packages\n$ pip install 'plotnine[test]'     # 3. testing\n$ pip install 'plotnine[doc]'      # 4. generating docs\n$ pip install 'plotnine[dev]'      # 5. development (making releases)\n$ pip install 'plotnine[all]'      # 6. everyting\n\n# Or using conda\n$ conda install -c conda-forge plotnine\n\nDevelopment version\n$ pip install git+https://github.com/has2k1/plotnine.git\n\nContributing\nOur documentation could use some examples, but we are looking for something\na little bit special. We have two criteria:\n\nSimple looking plots that otherwise require a trick or two.\nPlots that are part of a data analytic narrative. That is, they provide\nsome form of clarity showing off the geom, stat, ... at their\ndifferential best.\n\nIf you come up with something that meets those criteria, we would love to\nsee it. See plotnine-examples.\nIf you discover a bug checkout the issues\nif it has not been reported, yet please file an issue.\nAnd if you can fix a bug, your contribution is welcome.\nTesting\nPlotnine has tests that generate images which are compared to baseline images known\nto be correct. To generate images that are consistent across all systems you have\nto install matplotlib from source. 
You can do that with pip using the command.\n$ pip install matplotlib --no-binary matplotlib\n\nOtherwise there may be small differences in the text rendering that throw off the\nimage comparisons.\n", "description": "Grammar of graphics style plotting library based on ggplot2 for Python."}, {"name": "plotly", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nplotly.py\nQuickstart\nOverview\nInstallation\nJupyterLab Support\nJupyter Notebook Support\nStatic Image Export\nKaleido\nOrca\nExtended Geo Support\nMigration\nCopyright and Licenses\n\n\n\n\n\nREADME.md\n\n\n\n\nplotly.py\n\n\nLatest Release\n\n\n\n\n\n\nUser forum\n\n\n\n\n\n\nPyPI Downloads\n\n\n\n\n\n\nLicense\n\n\n\n\n\n\nQuickstart\npip install plotly==5.16.1\nInside Jupyter (installable with pip install \"jupyterlab>=3\" \"ipywidgets>=7.6\"):\nimport plotly.express as px\nfig = px.bar(x=[\"a\", \"b\", \"c\"], y=[1, 3, 2])\nfig.show()\nSee the Python documentation for more examples.\nOverview\nplotly.py is an interactive, open-source, and browser-based graphing library for Python \u2728\nBuilt on top of plotly.js, plotly.py is a high-level, declarative charting library. plotly.js ships with over 30 chart types, including scientific charts, 3D graphs, statistical charts, SVG maps, financial charts, and more.\nplotly.py is MIT Licensed. Plotly graphs can be viewed in Jupyter notebooks, standalone HTML files, or integrated into Dash applications.\nContact us for consulting, dashboard development, application integration, and feature additions.\n\n\n\n\n\n\nOnline Documentation\nContributing to plotly\nChangelog\nCode of Conduct\nVersion 4 Migration Guide\nNew! Announcing Dash 1.0\nCommunity forum\n\n\nInstallation\nplotly.py may be installed using pip...\npip install plotly==5.16.1\n\nor conda.\nconda install -c plotly plotly=5.16.1\n\nJupyterLab Support\nFor use in JupyterLab, install the jupyterlab and ipywidgets\npackages using pip:\npip install \"jupyterlab>=3\" \"ipywidgets>=7.6\"\n\nor conda:\nconda install \"jupyterlab>=3\" \"ipywidgets>=7.6\"\n\nThe instructions above apply to JupyterLab 3.x. 
For JupyterLab 2 or earlier, run the following commands to install the required JupyterLab extensions (note that this will require node to be installed):\n# JupyterLab 2.x renderer support\njupyter labextension install jupyterlab-plotly@5.16.1 @jupyter-widgets/jupyterlab-manager\n\nPlease check out our Troubleshooting guide if you run into any problems with JupyterLab.\nJupyter Notebook Support\nFor use in the Jupyter Notebook, install the notebook and ipywidgets\npackages using pip:\npip install \"notebook>=5.3\" \"ipywidgets>=7.5\"\n\nor conda:\nconda install \"notebook>=5.3\" \"ipywidgets>=7.5\"\n\nStatic Image Export\nplotly.py supports static image export,\nusing either the kaleido\npackage (recommended, supported as of plotly version 4.9) or the orca\ncommand line utility (legacy as of plotly version 4.9).\nKaleido\nThe kaleido package has no dependencies and can be installed\nusing pip...\npip install -U kaleido\n\nor conda.\nconda install -c conda-forge python-kaleido\n\nOrca\nWhile Kaleido is now the recommended image export approach because it is easier to install\nand more widely compatible, static image export\ncan also be supported\nby the legacy orca command line utility and the\npsutil Python package.\nThese dependencies can both be installed using conda:\nconda install -c plotly plotly-orca==1.3.1 psutil\n\nOr, psutil can be installed using pip...\npip install psutil\n\nand orca can be installed according to the instructions in the orca README.\nExtended Geo Support\nSome plotly.py features rely on fairly large geographic shape files. The county\nchoropleth figure factory is one such example. These shape files are distributed as a\nseparate plotly-geo package. This package can be installed using pip...\npip install plotly-geo==1.0.0\n\nor conda\nconda install -c plotly plotly-geo=1.0.0\n\nMigration\nIf you're migrating from plotly.py v3 to v4, please check out the Version 4 migration guide\nIf you're migrating from plotly.py v2 to v3, please check out the Version 3 migration guide\nCopyright and Licenses\nCode and documentation copyright 2019 Plotly, Inc.\nCode released under the MIT license.\nDocs released under the Creative Commons license.\n\n\n", "description": "Interactive, browser-based charting library for Python."}, {"name": "platformdirs", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nThe problem\nplatformdirs to the rescue\nExample output\nPlatformDirs for convenience\nPer-version isolation\nWhy this Fork?\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nThe problem\n\n\nWhen writing desktop application, finding the right location to store user data\nand configuration varies per platform. 
Even for single-platform apps, there\nmay by plenty of nuances in figuring out the right location.\nFor example, if running on macOS, you should use:\n~/Library/Application Support/<AppName>\n\nIf on Windows (at least English Win) that should be:\nC:\\Documents and Settings\\<User>\\Application Data\\Local Settings\\<AppAuthor>\\<AppName>\n\nor possibly:\nC:\\Documents and Settings\\<User>\\Application Data\\<AppAuthor>\\<AppName>\n\nfor roaming profiles but that is another story.\nOn Linux (and other Unices), according to the XDG Basedir Spec, it should be:\n~/.local/share/<AppName>\n\n\nplatformdirs to the rescue\nThis kind of thing is what the platformdirs package is for.\nplatformdirs will help you choose an appropriate:\n\nuser data dir (user_data_dir)\nuser config dir (user_config_dir)\nuser cache dir (user_cache_dir)\nsite data dir (site_data_dir)\nsite config dir (site_config_dir)\nuser log dir (user_log_dir)\nuser documents dir (user_documents_dir)\nuser downloads dir (user_downloads_dir)\nuser pictures dir (user_pictures_dir)\nuser videos dir (user_videos_dir)\nuser music dir (user_music_dir)\nuser desktop dir (user_desktop_dir)\nuser runtime dir (user_runtime_dir)\n\nAnd also:\n\nIs slightly opinionated on the directory names used. Look for \"OPINION\" in\ndocumentation and code for when an opinion is being applied.\n\n\nExample output\nOn macOS:\n>>> from platformdirs import *\n>>> appname = \"SuperApp\"\n>>> appauthor = \"Acme\"\n>>> user_data_dir(appname, appauthor)\n'/Users/trentm/Library/Application Support/SuperApp'\n>>> site_data_dir(appname, appauthor)\n'/Library/Application Support/SuperApp'\n>>> user_cache_dir(appname, appauthor)\n'/Users/trentm/Library/Caches/SuperApp'\n>>> user_log_dir(appname, appauthor)\n'/Users/trentm/Library/Logs/SuperApp'\n>>> user_documents_dir()\n'/Users/trentm/Documents'\n>>> user_downloads_dir()\n'/Users/trentm/Downloads'\n>>> user_pictures_dir()\n'/Users/trentm/Pictures'\n>>> user_videos_dir()\n'/Users/trentm/Movies'\n>>> user_music_dir()\n'/Users/trentm/Music'\n>>> user_desktop_dir()\n'/Users/trentm/Desktop'\n>>> user_runtime_dir(appname, appauthor)\n'/Users/trentm/Library/Caches/TemporaryItems/SuperApp'\nOn Windows:\n>>> from platformdirs import *\n>>> appname = \"SuperApp\"\n>>> appauthor = \"Acme\"\n>>> user_data_dir(appname, appauthor)\n'C:\\\\Users\\\\trentm\\\\AppData\\\\Local\\\\Acme\\\\SuperApp'\n>>> user_data_dir(appname, appauthor, roaming=True)\n'C:\\\\Users\\\\trentm\\\\AppData\\\\Roaming\\\\Acme\\\\SuperApp'\n>>> user_cache_dir(appname, appauthor)\n'C:\\\\Users\\\\trentm\\\\AppData\\\\Local\\\\Acme\\\\SuperApp\\\\Cache'\n>>> user_log_dir(appname, appauthor)\n'C:\\\\Users\\\\trentm\\\\AppData\\\\Local\\\\Acme\\\\SuperApp\\\\Logs'\n>>> user_documents_dir()\n'C:\\\\Users\\\\trentm\\\\Documents'\n>>> user_downloads_dir()\n'C:\\\\Users\\\\trentm\\\\Downloads'\n>>> user_pictures_dir()\n'C:\\\\Users\\\\trentm\\\\Pictures'\n>>> user_videos_dir()\n'C:\\\\Users\\\\trentm\\\\Videos'\n>>> user_music_dir()\n'C:\\\\Users\\\\trentm\\\\Music'\n>>> user_desktop_dir()\n'C:\\\\Users\\\\trentm\\\\Desktop'\n>>> user_runtime_dir(appname, appauthor)\n'C:\\\\Users\\\\trentm\\\\AppData\\\\Local\\\\Temp\\\\Acme\\\\SuperApp'\nOn Linux:\n>>> from platformdirs import *\n>>> appname = \"SuperApp\"\n>>> appauthor = \"Acme\"\n>>> user_data_dir(appname, appauthor)\n'/home/trentm/.local/share/SuperApp'\n>>> site_data_dir(appname, appauthor)\n'/usr/local/share/SuperApp'\n>>> site_data_dir(appname, appauthor, 
multipath=True)\n'/usr/local/share/SuperApp:/usr/share/SuperApp'\n>>> user_cache_dir(appname, appauthor)\n'/home/trentm/.cache/SuperApp'\n>>> user_log_dir(appname, appauthor)\n'/home/trentm/.cache/SuperApp/log'\n>>> user_config_dir(appname)\n'/home/trentm/.config/SuperApp'\n>>> user_documents_dir()\n'/home/trentm/Documents'\n>>> user_downloads_dir()\n'/home/trentm/Downloads'\n>>> user_pictures_dir()\n'/home/trentm/Pictures'\n>>> user_videos_dir()\n'/home/trentm/Videos'\n>>> user_music_dir()\n'/home/trentm/Music'\n>>> user_desktop_dir()\n'/home/trentm/Desktop'\n>>> user_runtime_dir(appname, appauthor)\n'/run/user/{os.getuid()}/SuperApp'\n>>> site_config_dir(appname)\n'/etc/xdg/SuperApp'\n>>> os.environ[\"XDG_CONFIG_DIRS\"] = \"/etc:/usr/local/etc\"\n>>> site_config_dir(appname, multipath=True)\n'/etc/SuperApp:/usr/local/etc/SuperApp'\nOn Android:\n>>> from platformdirs import *\n>>> appname = \"SuperApp\"\n>>> appauthor = \"Acme\"\n>>> user_data_dir(appname, appauthor)\n'/data/data/com.myApp/files/SuperApp'\n>>> user_cache_dir(appname, appauthor)\n'/data/data/com.myApp/cache/SuperApp'\n>>> user_log_dir(appname, appauthor)\n'/data/data/com.myApp/cache/SuperApp/log'\n>>> user_config_dir(appname)\n'/data/data/com.myApp/shared_prefs/SuperApp'\n>>> user_documents_dir()\n'/storage/emulated/0/Documents'\n>>> user_downloads_dir()\n'/storage/emulated/0/Downloads'\n>>> user_pictures_dir()\n'/storage/emulated/0/Pictures'\n>>> user_videos_dir()\n'/storage/emulated/0/DCIM/Camera'\n>>> user_music_dir()\n'/storage/emulated/0/Music'\n>>> user_desktop_dir()\n'/storage/emulated/0/Desktop'\n>>> user_runtime_dir(appname, appauthor)\n'/data/data/com.myApp/cache/SuperApp/tmp'\n\nNote: Some android apps like Termux and Pydroid are used as shells. These\napps are used by the end user to emulate Linux environment. Presence of\nSHELL environment variable is used by Platformdirs to differentiate\nbetween general android apps and android apps used as shells. 
Shell android\napps also support XDG_* environment variables.\n\nPlatformDirs for convenience\n>>> from platformdirs import PlatformDirs\n>>> dirs = PlatformDirs(\"SuperApp\", \"Acme\")\n>>> dirs.user_data_dir\n'/Users/trentm/Library/Application Support/SuperApp'\n>>> dirs.site_data_dir\n'/Library/Application Support/SuperApp'\n>>> dirs.user_cache_dir\n'/Users/trentm/Library/Caches/SuperApp'\n>>> dirs.user_log_dir\n'/Users/trentm/Library/Logs/SuperApp'\n>>> dirs.user_documents_dir\n'/Users/trentm/Documents'\n>>> dirs.user_downloads_dir\n'/Users/trentm/Downloads'\n>>> dirs.user_pictures_dir\n'/Users/trentm/Pictures'\n>>> dirs.user_videos_dir\n'/Users/trentm/Movies'\n>>> dirs.user_music_dir\n'/Users/trentm/Music'\n>>> dirs.user_desktop_dir\n'/Users/trentm/Desktop'\n>>> dirs.user_runtime_dir\n'/Users/trentm/Library/Caches/TemporaryItems/SuperApp'\n\nPer-version isolation\nIf you have multiple versions of your app in use that you want to be\nable to run side-by-side, then you may want version-isolation for these\ndirs:\n>>> from platformdirs import PlatformDirs\n>>> dirs = PlatformDirs(\"SuperApp\", \"Acme\", version=\"1.0\")\n>>> dirs.user_data_dir\n'/Users/trentm/Library/Application Support/SuperApp/1.0'\n>>> dirs.site_data_dir\n'/Library/Application Support/SuperApp/1.0'\n>>> dirs.user_cache_dir\n'/Users/trentm/Library/Caches/SuperApp/1.0'\n>>> dirs.user_log_dir\n'/Users/trentm/Library/Logs/SuperApp/1.0'\n>>> dirs.user_documents_dir\n'/Users/trentm/Documents'\n>>> dirs.user_downloads_dir\n'/Users/trentm/Downloads'\n>>> dirs.user_pictures_dir\n'/Users/trentm/Pictures'\n>>> dirs.user_videos_dir\n'/Users/trentm/Movies'\n>>> dirs.user_music_dir\n'/Users/trentm/Music'\n>>> dirs.user_desktop_dir\n'/Users/trentm/Desktop'\n>>> dirs.user_runtime_dir\n'/Users/trentm/Library/Caches/TemporaryItems/SuperApp/1.0'\n\nBe wary of using this for configuration files though; you'll need to handle\nmigrating configuration files manually.\n\nWhy this Fork?\nThis repository is a friendly fork of the wonderful work started by\nActiveState who created\nappdirs, this package's ancestor.\nMaintaining an open source project is no easy task, particularly\nfrom within an organization, and the Python community is indebted\nto appdirs (and to Trent Mick and Jeff Rouse in particular) for\ncreating an incredibly useful simple module, as evidenced by the wide\nnumber of users it has attracted over the years.\nNonetheless, given the number of long-standing open issues\nand pull requests, and no clear path towards ensuring\nthat maintenance of the package would continue or grow, this fork was\ncreated.\nContributions are most welcome.\n\n\n", "description": "Provides platform-appropriate paths for application data, config, cache and log files."}, {"name": "pkgutil-resolve-name", "readme": "\n\n\n\nREADME.rst\n\n\n\n\npkgutil-resolve-name\nA backport of Python 3.9's pkgutil.resolve_name.\nSee the Python 3.9 documentation.\n\n\n"}, {"name": "Pillow", "readme": "\n\n\n\nPillow\nPython Imaging Library (Fork)\nPillow is the friendly PIL fork by Jeffrey A. 
Clark (Alex) and\ncontributors.\nPIL is the Python Imaging Library by Fredrik Lundh and Contributors.\nAs of 2019, Pillow development is\nsupported by Tidelift.\n\n\ndocs\n\n\n\n\n\ntests\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npackage\n\n\n\n\n\n\n\n\n\nsocial\n\n\n\n\n\n\n\nOverview\nThe Python Imaging Library adds image processing capabilities to your Python interpreter.\nThis library provides extensive file format support, an efficient internal representation, and fairly powerful image processing capabilities.\nThe core image library is designed for fast access to data stored in a few basic pixel formats. It should provide a solid foundation for a general image processing tool.\nMore Information\n\nDocumentation\n\nInstallation\nHandbook\n\n\nContribute\n\nIssues\nPull requests\n\n\nRelease notes\nChangelog\n\nPre-fork\n\n\n\nReport a Vulnerability\nTo report a security vulnerability, please follow the procedure described in the Tidelift security policy.\n", "description": "Fork of Python Imaging Library to add image processing capabilities to Python.", "category": "Image processing"}, {"name": "pickleshare", "readme": "\n\n\n\nREADME.md\n\n\n\n\nPickleShare - a small 'shelve' like datastore with concurrency support\nLike shelve, a PickleShareDB object acts like a normal dictionary. Unlike shelve,\nmany processes can access the database simultaneously. Changing a value in\ndatabase is immediately visible to other processes accessing the same database.\nConcurrency is possible because the values are stored in separate files. Hence\nthe \"database\" is a directory where all files are governed by PickleShare.\nBoth python2 and python3 are supported.\nExample usage:\nfrom pickleshare import *\ndb = PickleShareDB('~/testpickleshare')\ndb.clear()\nprint(\"Should be empty:\", db.items())\ndb['hello'] = 15\ndb['aku ankka'] = [1,2,313]\ndb['paths/are/ok/key'] = [1,(5,46)]\nprint(db.keys())\nThis module is certainly not ZODB, but can be used for low-load\n(non-mission-critical) situations where tiny code size trumps the\nadvanced features of a \"real\" object database.\nInstallation guide:\npip install pickleshare\nOr, if installing from source\npip install .\n\n\n", "description": "Small 'shelve' like datastore for Python with concurrent access support."}, {"name": "pexpect", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Pexpect allows easy control of interactive console applications."}, {"name": "pdfrw", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npdfrw 0.4\n1\u00a0\u00a0\u00a0Introduction\n2\u00a0\u00a0\u00a0Examples\n2.1\u00a0\u00a0\u00a0All examples\n2.2\u00a0\u00a0\u00a0Notes on selected examples\n2.2.1\u00a0\u00a0\u00a0Reorganizing pages and placing them two-up\n2.2.2\u00a0\u00a0\u00a0Adding or modifying metadata\n2.2.3\u00a0\u00a0\u00a0Rotating and doubling\n2.2.4\u00a0\u00a0\u00a0Graphics stream parsing proof of concept\n3\u00a0\u00a0\u00a0pdfrw philosophy\n3.1\u00a0\u00a0\u00a0Core library\n3.2\u00a0\u00a0\u00a0Examples\n4\u00a0\u00a0\u00a0PDF files and Python\n4.1\u00a0\u00a0\u00a0Introduction\n4.2\u00a0\u00a0\u00a0Difficulties\n4.3\u00a0\u00a0\u00a0Usage Model\n4.3.1\u00a0\u00a0\u00a0Reading PDFs\n4.3.2\u00a0\u00a0\u00a0Writing PDFs\n4.3.3\u00a0\u00a0\u00a0Manipulating PDFs in memory\n4.3.4\u00a0\u00a0\u00a0Missing features\n5\u00a0\u00a0\u00a0Library internals\n5.1\u00a0\u00a0\u00a0Introduction\n5.2\u00a0\u00a0\u00a0PDF object model support\n5.2.1\u00a0\u00a0\u00a0Ordinary objects\n5.2.2\u00a0\u00a0\u00a0Name objects\n5.2.3\u00a0\u00a0\u00a0String objects\n5.2.4\u00a0\u00a0\u00a0Array objects\n5.2.5\u00a0\u00a0\u00a0Dict objects\n5.2.6\u00a0\u00a0\u00a0Proxy objects\n5.3\u00a0\u00a0\u00a0File reading, tokenization and parsing\n5.4\u00a0\u00a0\u00a0File output\n5.5\u00a0\u00a0\u00a0Advanced features\n5.6\u00a0\u00a0\u00a0Miscellaneous\n6\u00a0\u00a0\u00a0Testing\n7\u00a0\u00a0\u00a0Other libraries\n7.1\u00a0\u00a0\u00a0Pure Python\n7.2\u00a0\u00a0\u00a0non-pure-Python libraries\n7.3\u00a0\u00a0\u00a0Other tools\n8\u00a0\u00a0\u00a0Release information\n\n\n\n\n\nREADME.rst\n\n\n\n\npdfrw 0.4\n\n\nAuthor:\nPatrick Maupin\n\n\n\nContents\n\n1\u00a0\u00a0\u00a0Introduction\n2\u00a0\u00a0\u00a0Examples\n2.1\u00a0\u00a0\u00a0All examples\n2.2\u00a0\u00a0\u00a0Notes on selected examples\n2.2.1\u00a0\u00a0\u00a0Reorganizing pages and placing them two-up\n2.2.2\u00a0\u00a0\u00a0Adding or modifying metadata\n2.2.3\u00a0\u00a0\u00a0Rotating and doubling\n2.2.4\u00a0\u00a0\u00a0Graphics stream parsing proof of concept\n\n\n\n\n3\u00a0\u00a0\u00a0pdfrw philosophy\n3.1\u00a0\u00a0\u00a0Core library\n3.2\u00a0\u00a0\u00a0Examples\n\n\n4\u00a0\u00a0\u00a0PDF files and Python\n4.1\u00a0\u00a0\u00a0Introduction\n4.2\u00a0\u00a0\u00a0Difficulties\n4.3\u00a0\u00a0\u00a0Usage Model\n4.3.1\u00a0\u00a0\u00a0Reading PDFs\n4.3.2\u00a0\u00a0\u00a0Writing PDFs\n4.3.3\u00a0\u00a0\u00a0Manipulating PDFs in memory\n4.3.4\u00a0\u00a0\u00a0Missing features\n\n\n\n\n5\u00a0\u00a0\u00a0Library internals\n5.1\u00a0\u00a0\u00a0Introduction\n5.2\u00a0\u00a0\u00a0PDF object model support\n5.2.1\u00a0\u00a0\u00a0Ordinary objects\n5.2.2\u00a0\u00a0\u00a0Name objects\n5.2.3\u00a0\u00a0\u00a0String objects\n5.2.4\u00a0\u00a0\u00a0Array objects\n5.2.5\u00a0\u00a0\u00a0Dict objects\n5.2.6\u00a0\u00a0\u00a0Proxy objects\n\n\n5.3\u00a0\u00a0\u00a0File reading, tokenization and parsing\n5.4\u00a0\u00a0\u00a0File output\n5.5\u00a0\u00a0\u00a0Advanced features\n5.6\u00a0\u00a0\u00a0Miscellaneous\n\n\n6\u00a0\u00a0\u00a0Testing\n7\u00a0\u00a0\u00a0Other libraries\n7.1\u00a0\u00a0\u00a0Pure 
Python\n7.2\u00a0\u00a0\u00a0non-pure-Python libraries\n7.3\u00a0\u00a0\u00a0Other tools\n\n\n8\u00a0\u00a0\u00a0Release information\n\n\n\n1\u00a0\u00a0\u00a0Introduction\npdfrw is a Python library and utility that reads and writes PDF files:\n\nVersion 0.4 is tested and works on Python 2.6, 2.7, 3.3, 3.4, 3.5, and 3.6\nOperations include subsetting, merging, rotating, modifying metadata, etc.\nThe fastest pure Python PDF parser available\nHas been used for years by a printer in pre-press production\nCan be used with rst2pdf to faithfully reproduce vector images\nCan be used either standalone, or in conjunction with reportlab\nto reuse existing PDFs in new ones\nPermissively licensed\n\npdfrw will faithfully reproduce vector formats without\nrasterization, so the rst2pdf package has used pdfrw\nfor PDF and SVG images by default since March 2010.\npdfrw can also be used in conjunction with reportlab, in order\nto re-use portions of existing PDFs in new PDFs created with\nreportlab.\n\n2\u00a0\u00a0\u00a0Examples\nThe library comes with several examples that show operation both with\nand without reportlab.\n\n2.1\u00a0\u00a0\u00a0All examples\nThe examples directory has a few scripts which use the library.\nNote that if these examples do not work with your PDF, you should\ntry to use pdftk to uncompress and/or unencrypt them first.\n\n4up.py will shrink pages down and place 4 of them on\neach output page.\nalter.py shows an example of modifying metadata, without\naltering the structure of the PDF.\nbooklet.py shows an example of creating a 2-up output\nsuitable for printing and folding (e.g on tabloid size paper).\ncat.py shows an example of concatenating multiple PDFs together.\nextract.py will extract images and Form XObjects (embedded pages)\nfrom existing PDFs to make them easier to use and refer to from\nnew PDFs (e.g. with reportlab or rst2pdf).\nposter.py increases the size of a PDF so it can be printed\nas a poster.\nprint_two.py Allows creation of 8.5 X 5.5\" booklets by slicing\n8.5 X 11\" paper apart after printing.\nrotate.py Rotates all or selected pages in a PDF.\nsubset.py Creates a new PDF with only a subset of pages from the\noriginal.\nunspread.py Takes a 2-up PDF, and splits out pages.\nwatermark.py Adds a watermark PDF image over or under all the pages\nof a PDF.\nrl1/4up.py Another 4up example, using reportlab canvas for output.\nrl1/booklet.py Another booklet example, using reportlab canvas for\noutput.\nrl1/subset.py Another subsetting example, using reportlab canvas for\noutput.\nrl1/platypus_pdf_template.py Another watermarking example, using\nreportlab canvas and generated output for the document.  Contributed\nby user asannes.\nrl2 Experimental code for parsing graphics.  Needs work.\nsubset_booklets.py shows an example of creating a full printable pdf\nversion in a more professional and pratical way ( take a look at\nhttp://www.wikihow.com/Bind-a-Book )\n\n\n2.2\u00a0\u00a0\u00a0Notes on selected examples\n\n2.2.1\u00a0\u00a0\u00a0Reorganizing pages and placing them two-up\nA printer with a fancy printer and/or a full-up copy of Acrobat can\neasily turn your small PDF into a little booklet (for example, print 4\nletter-sized pages on a single 11\" x 17\").\nBut that assumes several things, including that the personnel know how\nto operate the hardware and software. 
booklet.py lets you turn your PDF\ninto a preformatted booklet, to give them fewer chances to mess it up.\n\n2.2.2\u00a0\u00a0\u00a0Adding or modifying metadata\nThe cat.py example will accept multiple input files on the command\nline, concatenate them and output them to output.pdf, after adding some\nnonsensical metadata to the output PDF file.\nThe alter.py example alters a single metadata item in a PDF,\nand writes the result to a new PDF.\nOne difference is that, since cat is creating a new PDF structure,\nand alter is attempting to modify an existing PDF structure, the\nPDF produced by alter (and also by watermark.py) should be\nmore faithful to the original (except for the desired changes).\nFor example, the alter.py navigation should be left intact, whereas with\ncat.py it will be stripped.\n\n2.2.3\u00a0\u00a0\u00a0Rotating and doubling\nIf you ever want to print something that is like a small booklet, but\nneeds to be spiral bound, you either have to do some fancy rearranging,\nor just waste half your paper.\nThe print_two.py example program will, for example, make two side-by-side\ncopies each page of of your PDF on a each output sheet.\nBut, every other page is flipped, so that you can print double-sided and\nthe pages will line up properly and be pre-collated.\n\n2.2.4\u00a0\u00a0\u00a0Graphics stream parsing proof of concept\nThe copy.py script shows a simple example of reading in a PDF, and\nusing the decodegraphics.py module to try to write the same information\nout to a new PDF through a reportlab canvas. (If you know about reportlab,\nyou know that if you can faithfully render a PDF to a reportlab canvas, you\ncan do pretty much anything else with that PDF you want.) This kind of\nlow level manipulation should be done only if you really need to.\ndecodegraphics is really more than a proof of concept than anything\nelse. For most cases, just use the Form XObject capability, as shown in\nthe examples/rl1/booklet.py demo.\n\n3\u00a0\u00a0\u00a0pdfrw philosophy\n\n3.1\u00a0\u00a0\u00a0Core library\nThe philosophy of the library portion of pdfrw is to provide intuitive\nfunctions to read, manipulate, and write PDF files.  There should be\nminimal leakage between abstraction layers, although getting useful\nwork done makes \"pure\" functionality separation difficult.\nA key concept supported by the library is the use of Form XObjects,\nwhich allow easy embedding of pieces of one PDF into another.\nAddition of core support to the library is typically done carefully\nand thoughtfully, so as not to clutter it up with too many special\ncases.\nThere are a lot of incorrectly formatted PDFs floating around; support\nfor these is added in some cases.  The decision is often based on what\nacroread and okular do with the PDFs; if they can display them properly,\nthen eventually pdfrw should, too, if it is not too difficult or costly.\nContributions are welcome; one user has contributed some decompression\nfilters and the ability to process PDF 1.5 stream objects.  Additional\nfunctionality that would obviously be useful includes additional\ndecompression filters, the ability to process password-protected PDFs,\nand the ability to output linearized PDFs.\n\n3.2\u00a0\u00a0\u00a0Examples\nThe philosophy of the examples is to provide small, easily-understood\nexamples that showcase pdfrw functionality.\n\n4\u00a0\u00a0\u00a0PDF files and Python\n\n4.1\u00a0\u00a0\u00a0Introduction\nIn general, PDF files conceptually map quite well to Python. 
The major\nobjects to think about are:\n\nstrings. Most things are strings. These also often decompose\nnaturally into\nlists of tokens. Tokens can be combined to create higher-level\nobjects like\narrays and\ndictionaries and\nContents streams (which can be more streams of tokens)\n\n\n4.2\u00a0\u00a0\u00a0Difficulties\nThe apparent primary difficulty in mapping PDF files to Python is the\nPDF file concept of \"indirect objects.\"  Indirect objects provide\nthe efficiency of allowing a single piece of data to be referred to\nfrom more than one containing object, but probably more importantly,\nindirect objects provide a way to get around the chicken and egg\nproblem of circular object references when mapping arbitrary data\nstructures to files. To flatten out a circular reference, an indirect\nobject is referred to instead of being directly included in another\nobject. PDF files have a global mechanism for locating indirect objects,\nand they all have two reference numbers (a reference number and a\n\"generation\" number, in case you wanted to append to the PDF file\nrather than just rewriting the whole thing).\npdfrw automatically handles indirect references on reading in a PDF\nfile. When pdfrw encounters an indirect PDF file object, the\ncorresponding Python object it creates will have an 'indirect' attribute\nwith a value of True. When writing a PDF file, if you have created\narbitrary data, you just need to make sure that circular references are\nbroken up by putting an attribute named 'indirect' which evaluates to\nTrue on at least one object in every cycle.\nAnother PDF file concept that doesn't quite map to regular Python is a\n\"stream\". Streams are dictionaries which each have an associated\nunformatted data block. pdfrw handles streams by placing a special\nattribute on a subclassed dictionary.\n\n4.3\u00a0\u00a0\u00a0Usage Model\nThe usage model for pdfrw treats most objects as strings (it takes their\nstring representation when writing them to a file). The two main\nexceptions are the PdfArray object and the PdfDict object.\nPdfArray is a subclass of list with two special features.  First,\nan 'indirect' attribute allows a PdfArray to be written out as\nan indirect PDF object.  Second, pdfrw reads files lazily, so\nPdfArray knows about, and resolves references to other indirect\nobjects on an as-needed basis.\nPdfDict is a subclass of dict that also has an indirect attribute\nand lazy reference resolution as well.  (And the subclassed\nIndirectPdfDict has indirect automatically set True).\nBut PdfDict also has an optional associated stream. The stream object\ndefaults to None, but if you assign a stream to the dict, it will\nautomatically set the PDF /Length attribute for the dictionary.\nFinally, since PdfDict instances are indexed by PdfName objects (which\nalways start with a /) and since most (all?) standard Adobe PdfName\nobjects use names formatted like \"/CamelCase\", it makes sense to allow\naccess to dictionary elements via object attribute accesses as well as\nobject index accesses. 
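For example, a minimal sketch of the two access styles (assuming pdfrw is installed):

from pdfrw import PdfDict, PdfName

d = PdfDict()
d.Rotate = 90                  # attribute access sets the '/Rotate' key
print(d[PdfName('Rotate')])    # index access with a PdfName key -> 90
print(d.Rotate)                # the same value, via attribute access
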
So usage of PdfDict objects is normally via\nattribute access, although non-standard names (though still with a\nleading slash) can be accessed via dictionary index lookup.\n\n4.3.1\u00a0\u00a0\u00a0Reading PDFs\nThe PdfReader object is a subclass of PdfDict, which allows easy access\nto an entire document:\n>>> from pdfrw import PdfReader\n>>> x = PdfReader('source.pdf')\n>>> x.keys()\n['/Info', '/Size', '/Root']\n>>> x.Info\n{'/Producer': '(cairo 1.8.6 (http://cairographics.org))',\n '/Creator': '(cairo 1.8.6 (http://cairographics.org))'}\n>>> x.Root.keys()\n['/Type', '/Pages']\n\nInfo, Size, and Root are retrieved from the trailer of the PDF file.\nIn addition to the tree structure, pdfrw creates a special attribute\nnamed pages, that is a list of all the pages in the document. pdfrw\ncreates the pages attribute as a simplification for the user, because\nthe PDF format allows arbitrarily complicated nested dictionaries to\ndescribe the page order. Each entry in the pages list is the PdfDict\nobject for one of the pages in the file, in order.\n>>> len(x.pages)\n1\n>>> x.pages[0]\n{'/Parent': {'/Kids': [{...}], '/Type': '/Pages', '/Count': '1'},\n '/Contents': {'/Length': '11260', '/Filter': None},\n '/Resources': ... (Lots more stuff snipped)\n>>> x.pages[0].Contents\n{'/Length': '11260', '/Filter': None}\n>>> x.pages[0].Contents.stream\n'q\\n1 1 1 rg /a0 gs\\n0 0 0 RG 0.657436\n  w\\n0 J\\n0 j\\n[] 0.0 d\\n4 M q' ... (Lots more stuff snipped)\n\n\n4.3.2\u00a0\u00a0\u00a0Writing PDFs\nAs you can see, it is quite easy to dig down into a PDF document. But\nwhat about when it's time to write it out?\n>>> from pdfrw import PdfWriter\n>>> y = PdfWriter()\n>>> y.addpage(x.pages[0])\n>>> y.write('result.pdf')\n\nThat's all it takes to create a new PDF. You may still need to read the\nAdobe PDF reference manual to figure out what needs to go into\nthe PDF, but at least you don't have to sweat actually building it\nand getting the file offsets right.\n\n4.3.3\u00a0\u00a0\u00a0Manipulating PDFs in memory\nFor the most part, pdfrw tries to be agnostic about the contents of\nPDF files, and support them as containers, but to do useful work,\nsomething a little higher-level is required, so pdfrw works to\nunderstand a bit about the contents of the containers.  For example:\n\nPDF pages. pdfrw knows enough to find the pages in PDF files you read\nin, and to write a set of pages back out to a new PDF file.\nForm XObjects. pdfrw can take any page or rectangle on a page, and\nconvert it to a Form XObject, suitable for use inside another PDF\nfile.  It knows enough about these to perform scaling, rotation,\nand positioning.\nreportlab objects. pdfrw can recursively create a set of reportlab\nobjects from its internal object format. This allows, for example,\nForm XObjects to be used inside reportlab, so that you can reuse\ncontent from an existing PDF file when building a new PDF with\nreportlab.\n\nThere are several examples that demonstrate these features in\nthe example code directory.\n\n4.3.4\u00a0\u00a0\u00a0Missing features\nEven as a pure PDF container library, pdfrw comes up a bit short. 
It\ndoes not currently support:\n\nMost compression/decompression filters\nencryption\n\npdftk is a wonderful command-line\ntool that can convert your PDFs to remove encryption and compression.\nHowever, in most cases, you can do a lot of useful work with PDFs\nwithout actually removing compression, because only certain elements\ninside PDFs are actually compressed.\n\n5\u00a0\u00a0\u00a0Library internals\n\n5.1\u00a0\u00a0\u00a0Introduction\npdfrw currently consists of 19 modules organized into a main\npackage and one sub-package.\nThe __init.py__ module does the usual thing of importing a few\nmajor attributes from some of the submodules, and the errors.py\nmodule supports logging and exception generation.\n\n5.2\u00a0\u00a0\u00a0PDF object model support\nThe objects sub-package contains one module for each of the\ninternal representations of the kinds of basic objects that exist\nin a PDF file, with the objects/__init__.py module in that\npackage simply gathering them up and making them available to the\nmain pdfrw package.\nOne feature that all the PDF object classes have in common is the\ninclusion of an 'indirect' attribute. If 'indirect' exists and evaluates\nto True, then when the object is written out, it is written out as an\nindirect object. That is to say, it is addressable in the PDF file, and\ncould be referenced by any number (including zero) of container objects.\nThis indirect object capability saves space in PDF files by allowing\nobjects such as fonts to be referenced from multiple pages, and also\nallows PDF files to contain internal circular references.  This latter\ncapability is used, for example, when each page object has a \"parent\"\nobject in its dictionary.\n\n5.2.1\u00a0\u00a0\u00a0Ordinary objects\nThe objects/pdfobject.py module contains the PdfObject class, which is\na subclass of str, and is the catch-all object for any PDF file elements\nthat are not explicitly represented by other objects, as described below.\n\n5.2.2\u00a0\u00a0\u00a0Name objects\nThe objects/pdfname.py module contains the PdfName singleton object,\nwhich will convert a string into a PDF name by prepending a slash. It can\nbe used either by calling it or getting an attribute, e.g.:\nPdfName.Rotate == PdfName('Rotate') == PdfObject('/Rotate')\n\nIn the example above, there is a slight difference between the objects\nreturned from PdfName, and the object returned from PdfObject.  The\nPdfName objects are actually objects of class \"BasePdfName\".  This\nis important, because only these may be used as keys in PdfDict objects.\n\n5.2.3\u00a0\u00a0\u00a0String objects\nThe objects/pdfstring.py\nmodule contains the PdfString class, which is a subclass of str that is\nused to represent encoded strings in a PDF file. The class has encode\nand decode methods for the strings.\n\n5.2.4\u00a0\u00a0\u00a0Array objects\nThe objects/pdfarray.py\nmodule contains the PdfArray class, which is a subclass of list that is\nused to represent arrays in a PDF file. A regular list could be used\ninstead, but use of the PdfArray class allows for an indirect attribute\nto be set, and also allows for proxying of unresolved indirect objects\n(that haven't been read in yet) in a manner that is transparent to pdfrw\nclients.\n\n5.2.5\u00a0\u00a0\u00a0Dict objects\nThe objects/pdfdict.py\nmodule contains the PdfDict class, which is a subclass of dict that is\nused to represent dictionaries in a PDF file. 
A regular dict could be\nused instead, but the PdfDict class matches the requirements of PDF\nfiles more closely:\n\nTransparent (from the library client's viewpoint) proxying\nof unresolved indirect objects\nReturn of None for non-existent keys (like dict.get)\nMapping of attribute accesses to the dict itself\n(pdfdict.Foo == pdfdict[NameObject('Foo')])\nAutomatic management of following stream and /Length attributes\nfor content dictionaries\nIndirect attribute\nOther attributes may be set for private internal use of the\nlibrary and/or its clients.\nSupport for searching parent dictionaries for PDF \"inheritable\"\nattributes.\n\nIf a PdfDict has an associated data stream in the PDF file, the stream\nis accessed via the 'stream' (all lower-case) attribute.  Setting the\nstream attribute on the PdfDict will automatically set the /Length attribute\nas well.  If that is not what is desired (for example if the the stream\nis compressed), then _stream (same name with an underscore) may be used\nto associate the stream with the PdfDict without setting the length.\nTo set private attributes (that will not be written out to a new PDF\nfile) on a dictionary, use the 'private' attribute:\nmydict.private.foo = 1\n\nOnce the attribute is set, it may be accessed directly as an attribute\nof the dictionary:\nfoo = mydict.foo\n\nSome attributes of PDF pages are \"inheritable.\"  That is, they may\nbelong to a parent dictionary (or a parent of a parent dictionary, etc.)\nThe \"inheritable\" attribute allows for easy discovery of these:\nmediabox = mypage.inheritable.MediaBox\n\n\n5.2.6\u00a0\u00a0\u00a0Proxy objects\nThe objects/pdfindirect.py\nmodule contains the PdfIndirect class, which is a non-transparent proxy\nobject for PDF objects that have not yet been read in and resolved from\na file. Although these are non-transparent inside the library, client code\nshould never see one of these -- they exist inside the PdfArray and PdfDict\ncontainer types, but are resolved before being returned to a client of\nthose types.\n\n5.3\u00a0\u00a0\u00a0File reading, tokenization and parsing\npdfreader.py\ncontains the PdfReader class, which can read a PDF file (or be passed a\nfile object or already read string) and parse it. It uses the PdfTokens\nclass in tokens.py  for low-level tokenization.\nThe PdfReader class does not, in general, parse into containers (e.g.\ninside the content streams). There is a proof of concept for doing that\ninside the examples/rl2 subdirectory, but that is slow and not well-developed,\nand not useful for most applications.\nAn instance of the PdfReader class is an instance of a PdfDict -- the\ntrailer dictionary of the PDF file, to be exact.  It will have a private\nattribute set on it that is named 'pages' that is a list containing all\nthe pages in the file.\nWhen instantiating a PdfReader object, there are options available\nfor decompressing all the objects in the file.  pdfrw does not currently\nhave very many options for decompression, so this is not all that useful,\nexcept in the specific case of compressed object streams.\nAlso, there are no options for decryption yet.  
If you have PDF files\nthat are encrypted or heavily compressed, you may find that using another\nprogram like pdftk on them can make them readable by pdfrw.\nIn general, the objects are read from the file lazily, but this is not\ncurrently true with compressed object streams -- all of these are decompressed\nand read in when the PdfReader is instantiated.\n\n5.4\u00a0\u00a0\u00a0File output\npdfwriter.py\ncontains the PdfWriter class, which can create and output a PDF file.\nThere are a few options available when creating and using this class.\nIn the simplest case, an instance of PdfWriter is instantiated, and\nthen pages are added to it from one or more source files (or created\nprogrammatically), and then the write method is called to dump the\nresults out to a file.\nIf you have a source PDF and do not want to disturb the structure\nof it too badly, then you may pass its trailer directly to PdfWriter\nrather than letting PdfWriter construct one for you.  There is an\nexample of this (alter.py) in the examples directory.\n\n5.5\u00a0\u00a0\u00a0Advanced features\nbuildxobj.py\ncontains functions to build Form XObjects out of pages or rectangles on\npages.  These may be reused in new PDFs essentially as if they were images.\nbuildxobj is careful to cache any page used so that it only appears in\nthe output once.\ntoreportlab.py\nprovides the makerl function, which will translate pdfrw objects into a\nformat which can be used with reportlab.\nIt is normally used in conjunction with buildxobj, to be able to reuse\nparts of existing PDFs when using reportlab.\npagemerge.py builds on the foundation laid by buildxobj.  It\ncontains classes to create a new page (or overlay an existing page)\nusing one or more rectangles from other pages.  There are examples\nshowing its use for watermarking, scaling, 4-up output, splitting\neach page in 2, etc.\nfindobjs.py contains code that can find specific kinds of objects\ninside a PDF file.  The extract.py example uses this module to create\na new PDF that places each image and Form XObject from a source PDF onto\nits own page, e.g. for easy reuse with some of the other examples or\nwith reportlab.\n\n5.6\u00a0\u00a0\u00a0Miscellaneous\ncompress.py and uncompress.py\ncontains compression and decompression functions. Very few filters are\ncurrently supported, so an external tool like pdftk might be good if you\nrequire the ability to decompress (or, for that matter, decrypt) PDF\nfiles.\npy23_diffs.py contains code to help manage the differences between\nPython 2 and Python 3.\n\n6\u00a0\u00a0\u00a0Testing\nThe tests associated with pdfrw require a large number of PDFs,\nwhich are not distributed with the library.\nTo run the tests:\n\nDownload or clone the full package from github.com/pmaupin/pdfrw\ncd into the tests directory, and then clone the package\ngithub.com/pmaupin/static_pdfs into a subdirectory (also named\nstatic_pdfs).\nNow the tests may be run from tests directory using unittest, or\npy.test, or nose.\ntravisci is used at github, and runs the tests with py.test\n\nTo run a single test-case:\n\n7\u00a0\u00a0\u00a0Other libraries\n\n7.1\u00a0\u00a0\u00a0Pure Python\n\nreportlab\n\nreportlab is must-have software if you want to programmatically\ngenerate arbitrary PDFs.\n\n\npyPdf\n\npyPdf is, in some ways, very full-featured. It can do decompression\nand decryption and seems to know a lot about items inside at least\nsome kinds of PDF files. 
In comparison, pdfrw knows less about\nspecific PDF file features (such as metadata), but focuses on trying\nto have a more Pythonic API for mapping the PDF file container\nsyntax to Python, and (IMO) has a simpler and better PDF file\nparser.  The Form XObject capability of pdfrw means that, in many\ncases, it does not actually need to decompress objects -- they\ncan be left compressed.\n\n\npdftools\n\npdftools feels large and I fell asleep trying to figure out how it\nall fit together, but many others have done useful things with it.\n\n\npagecatcher\n\nMy understanding is that pagecatcher would have done exactly what I\nwanted when I built pdfrw. But I was on a zero budget, so I've never\nhad the pleasure of experiencing pagecatcher. I do, however, use and\nlike reportlab (open source, from\nthe people who make pagecatcher) so I'm sure pagecatcher is great,\nbetter documented and much more full-featured than pdfrw.\n\n\npdfminer\n\nThis looks like a useful, actively-developed program. It is quite\nlarge, but then, it is trying to actively comprehend a full PDF\ndocument. From the website:\n\"PDFMiner is a suite of programs that help extracting and analyzing\ntext data of PDF documents. Unlike other PDF-related tools, it\nallows to obtain the exact location of texts in a page, as well as\nother extra information such as font information or ruled lines. It\nincludes a PDF converter that can transform PDF files into other\ntext formats (such as HTML). It has an extensible PDF parser that\ncan be used for other purposes instead of text analysis.\"\n\n\n\n\n7.2\u00a0\u00a0\u00a0non-pure-Python libraries\n\npyPoppler can read PDF\nfiles.\npycairo can write PDF\nfiles.\nPyMuPDF high performance rendering\nof PDF, (Open)XPS, CBZ and EPUB\n\n\n7.3\u00a0\u00a0\u00a0Other tools\n\npdftk is a wonderful command\nline tool for basic PDF manipulation. It complements pdfrw extremely\nwell, supporting many operations such as decryption and decompression\nthat pdfrw cannot do.\nMuPDF is a free top performance PDF, (Open)XPS, CBZ and EPUB rendering library\nthat also comes with some command line tools. One of those, mutool, has big overlaps with pdftk's -\nexcept it is up to 10 times faster.\n\n\n8\u00a0\u00a0\u00a0Release information\nRevisions:\n0.4 -- Released 18 September, 2017\n\n\nPython 3.6 added to test matrix\nProper unicode support for text strings in PDFs added\nbuildxobj fixes allow better support creating form XObjects\nout of compressed pages in some cases\nCompression fixes for Python 3+\nNew subset_booklets.py example\nBug with non-compressed indices into compressed object streams fixed\nBug with distinguishing compressed object stream first objects fixed\nBetter error reporting added for some invalid PDFs (e.g. 
when reading\npast the end of file)\nBetter scrubbing of old bookmark information when writing PDFs, to\nremove dangling references\nRefactoring of pdfwriter, including updating API, to allow future\nenhancements for things like incremental writing\nMinor tokenizer speedup\nSome flate decompressor bugs fixed\nCompression and decompression tests added\nTests for new unicode handling added\nPdfReader.readpages() recursion error (issue #92) fixed.\nInitial crypt filter support added\n\n\n0.3 -- Released 19 October, 2016.\n\n\nPython 3.5 added to test matrix\nBetter support under Python 3.x for in-memory PDF file-like objects\nSome pagemerge and Unicode patches added\nChanges to logging allow better coexistence with other packages\nFix for \"from pdfrw import *\"\nNew fancy_watermark.py example shows off capabilities of pagemerge.py\nmetadata.py example renamed to cat.py\n\n\n0.2 -- Released 21 June, 2015.  Supports Python 2.6, 2.7, 3.3, and 3.4.\n\n\nSeveral bugs have been fixed\nNew regression test functionally tests core with dozens of\nPDFs, and also tests examples.\nCore has been ported and tested on Python3 by round-tripping\nseveral difficult files and observing binary matching results\nacross the different Python versions.\nStill only minimal support for compression and no support\nfor encryption or newer PDF features.  (pdftk is useful\nto put PDFs in a form that pdfrw can use.)\n\n\n0.1 -- Released to PyPI in 2012.  Supports Python 2.5 - 2.7\n\n\n", "description": "Read and write PDF files in pure Python with support for subsets, merging, rotating, etc."}, {"name": "pdfplumber", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npdfplumber\nTable of Contents\nInstallation\nCommand line interface\nBasic example\nOptions\nPython library\nBasic example\nLoading a PDF\nThe pdfplumber.PDF class\nThe pdfplumber.Page class\nObjects\nchar properties\nline properties\nrect properties\ncurve properties\nDerived properties\nimage properties\nObtaining higher-level layout objects via pdfminer.six\nVisual debugging\nCreating a PageImage with .to_image()\nBasic PageImage methods\nDrawing methods\nTroubleshooting ImageMagick on Debian-based systems\nExtracting text\nExtracting tables\nTable-extraction methods\nTable-extraction settings\nTable-extraction strategies\nNotes\nExtracting form values\nDemonstrations\nComparison to other libraries\nSpecific comparisons\nAcknowledgments / Contributors\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\npdfplumber\n   \nPlumb a PDF for detailed information about each text character, rectangle, and line. Plus: Table extraction and visual debugging.\nWorks best on machine-generated, rather than scanned, PDFs. Built on pdfminer.six.\nCurrently tested on Python 3.8, 3.9, 3.10, 3.11.\nTranslations of this document are available in: Chinese (by @hbh112233abc).\nTo report a bug or request a feature, please file an issue. To ask a question or request assistance with a specific PDF, please use the discussions forum.\n\n\ud83d\udc4b This repository\u2019s maintainers are available to hire for PDF data-extraction consulting projects. 
To get a cost estimate, contact Jeremy (for projects of any size or complexity) and/or Samkit (specifically for table extraction).\n\nTable of Contents\n\nInstallation\nCommand line interface\nPython library\nVisual debugging\nExtracting text\nExtracting tables\nExtracting form values\nDemonstrations\nComparison to other libraries\nAcknowledgments / Contributors\nContributing\n\nInstallation\npip install pdfplumber\nCommand line interface\nBasic example\ncurl \"https://raw.githubusercontent.com/jsvine/pdfplumber/stable/examples/pdfs/background-checks.pdf\" > background-checks.pdf\npdfplumber < background-checks.pdf > background-checks.csv\nThe output will be a CSV containing info about every character, line, and rectangle in the PDF.\nOptions\n\n\n\nArgument\nDescription\n\n\n\n\n--format [format]\ncsv or json. The json format returns more information; it includes PDF-level and page-level metadata, plus dictionary-nested attributes.\n\n\n--pages [list of pages]\nA space-delimited, 1-indexed list of pages or hyphenated page ranges. E.g., 1, 11-15, which would return data for pages 1, 11, 12, 13, 14, and 15.\n\n\n--types [list of object types to extract]\nChoices are char, rect, line, curve, image, annot, et cetera. Defaults to all available.\n\n\n--laparams\nA JSON-formatted string (e.g., '{\"detect_vertical\": true}') to pass to pdfplumber.open(..., laparams=...).\n\n\n--precision [integer]\nThe number of decimal places to round floating-point numbers. Defaults to no rounding.\n\n\n\nPython library\nBasic example\nimport pdfplumber\n\nwith pdfplumber.open(\"path/to/file.pdf\") as pdf:\n    first_page = pdf.pages[0]\n    print(first_page.chars[0])\nLoading a PDF\nTo start working with a PDF, call pdfplumber.open(x), where x can be a:\n\npath to your PDF file\nfile object, loaded as bytes\nfile-like object, loaded as bytes\n\nThe open method returns an instance of the pdfplumber.PDF class.\nTo load a password-protected PDF, pass the password keyword argument, e.g., pdfplumber.open(\"file.pdf\", password = \"test\").\nTo set layout analysis parameters to pdfminer.six's layout engine, pass the laparams keyword argument, e.g., pdfplumber.open(\"file.pdf\", laparams = { \"line_overlap\": 0.7 }).\nInvalid metadata values are treated as a warning by default. If that is not intended, pass strict_metadata=True to the open method and pdfplumber.open will raise an exception if it is unable to parse the metadata.\nThe pdfplumber.PDF class\nThe top-level pdfplumber.PDF class represents a single PDF and has two main properties:\n\n\n\nProperty\nDescription\n\n\n\n\n.metadata\nA dictionary of metadata key/value pairs, drawn from the PDF's Info trailers. Typically includes \"CreationDate,\" \"ModDate,\" \"Producer,\" et cetera.\n\n\n.pages\nA list containing one pdfplumber.Page instance per page loaded.\n\n\n\n... and also has the following method:\n\n\n\nMethod\nDescription\n\n\n\n\n.close()\nBy default, Page objects cache their layout and object information to avoid having to reprocess it. When parsing large PDFs, however, these cached properties can require a lot of memory. You can use this method to flush the cache and release the memory. (In version <= 0.5.25, use .flush_cache().)\n\n\n\nThe pdfplumber.Page class\nThe pdfplumber.Page class is at the core of pdfplumber. Most things you'll do with pdfplumber will revolve around this class. 
It has these main properties:\n\n\n\nProperty\nDescription\n\n\n\n\n.page_number\nThe sequential page number, starting with 1 for the first page, 2 for the second, and so on.\n\n\n.width\nThe page's width.\n\n\n.height\nThe page's height.\n\n\n.objects / .chars / .lines / .rects / .curves / .images\nEach of these properties is a list, and each list contains one dictionary for each such object embedded on the page. For more detail, see \"Objects\" below.\n\n\n\n... and these main methods:\n\n\n\nMethod\nDescription\n\n\n\n\n.crop(bounding_box, relative=False, strict=True)\nReturns a version of the page cropped to the bounding box, which should be expressed as 4-tuple with the values (x0, top, x1, bottom). Cropped pages retain objects that fall at least partly within the bounding box. If an object falls only partly within the box, its dimensions are sliced to fit the bounding box. If relative=True, the bounding box is calculated as an offset from the top-left of the page's bounding box, rather than an absolute positioning. (See Issue #245 for a visual example and explanation.) When strict=True (the default), the crop's bounding box must fall entirely within the page's bounding box.\n\n\n.within_bbox(bounding_box, relative=False, strict=True)\nSimilar to .crop, but only retains objects that fall entirely within the bounding box.\n\n\n.outside_bbox(bounding_box, relative=False, strict=True)\nSimilar to .crop and .within_bbox, but only retains objects that fall entirely outside the bounding box.\n\n\n.filter(test_function)\nReturns a version of the page with only the .objects for which test_function(obj) returns True.\n\n\n\nAdditional methods are described in the sections below:\n\nVisual debugging\nExtracting text\nExtracting tables\n\nObjects\nEach instance of pdfplumber.PDF and pdfplumber.Page provides access to several types of PDF objects, all derived from pdfminer.six PDF parsing. The following properties each return a Python list of the matching objects:\n\n.chars, each representing a single text character.\n.lines, each representing a single 1-dimensional line.\n.rects, each representing a single 2-dimensional rectangle.\n.curves, each representing any series of connected points that pdfminer.six does not recognize as a line or rectangle.\n.images, each representing an image.\n.annots, each representing a single PDF annotation (cf. 
Section 8.4 of the official PDF specification for details)\n.hyperlinks, each representing a single PDF annotation of the subtype Link and having an URI action attribute\n\nEach object is represented as a simple Python dict, with the following properties:\nchar properties\n\n\n\nProperty\nDescription\n\n\n\n\npage_number\nPage number on which this character was found.\n\n\ntext\nE.g., \"z\", or \"Z\" or \" \".\n\n\nfontname\nName of the character's font face.\n\n\nsize\nFont size.\n\n\nadv\nEqual to text width * the font size * scaling factor.\n\n\nupright\nWhether the character is upright.\n\n\nheight\nHeight of the character.\n\n\nwidth\nWidth of the character.\n\n\nx0\nDistance of left side of character from left side of page.\n\n\nx1\nDistance of right side of character from left side of page.\n\n\ny0\nDistance of bottom of character from bottom of page.\n\n\ny1\nDistance of top of character from bottom of page.\n\n\ntop\nDistance of top of character from top of page.\n\n\nbottom\nDistance of bottom of the character from top of page.\n\n\ndoctop\nDistance of top of character from top of document.\n\n\nmatrix\nThe \"current transformation matrix\" for this character. (See below for details.)\n\n\nncs\nTKTK\n\n\nstroking_pattern\nTKTK\n\n\nnon_stroking_pattern\nTKTK\n\n\nstroking_color\nThe color of the character's outline (i.e., stroke). See docs/colors.md for details.\n\n\nnon_stroking_color\nThe character's interior color. See docs/colors.md for details.\n\n\nobject_type\n\"char\"\n\n\n\nNote: A character\u2019s matrix property represents the \u201ccurrent transformation matrix,\u201d as described in Section 4.2.2 of the PDF Reference (6th Ed.). The matrix controls the character\u2019s scale, skew, and positional translation. Rotation is a combination of scale and skew, but in most cases can be considered equal to the x-axis skew. The pdfplumber.ctm submodule defines a class, CTM, that assists with these calculations. For instance:\nfrom pdfplumber.ctm import CTM\nmy_char = pdf.pages[0].chars[3]\nmy_char_ctm = CTM(*my_char[\"matrix\"])\nmy_char_rotation = my_char_ctm.skew_x\nline properties\n\n\n\nProperty\nDescription\n\n\n\n\npage_number\nPage number on which this line was found.\n\n\nheight\nHeight of line.\n\n\nwidth\nWidth of line.\n\n\nx0\nDistance of left-side extremity from left side of page.\n\n\nx1\nDistance of right-side extremity from left side of page.\n\n\ny0\nDistance of bottom extremity from bottom of page.\n\n\ny1\nDistance of top extremity bottom of page.\n\n\ntop\nDistance of top of line from top of page.\n\n\nbottom\nDistance of bottom of the line from top of page.\n\n\ndoctop\nDistance of top of line from top of document.\n\n\nlinewidth\nThickness of line.\n\n\nstroking_color\nThe color of the line. See docs/colors.md for details.\n\n\nnon_stroking_color\nThe non-stroking color specified for the line\u2019s path. 
See docs/colors.md for details.\n\n\nobject_type\n\"line\"\n\n\n\nrect properties\n\n\n\nProperty\nDescription\n\n\n\n\npage_number\nPage number on which this rectangle was found.\n\n\nheight\nHeight of rectangle.\n\n\nwidth\nWidth of rectangle.\n\n\nx0\nDistance of left side of rectangle from left side of page.\n\n\nx1\nDistance of right side of rectangle from left side of page.\n\n\ny0\nDistance of bottom of rectangle from bottom of page.\n\n\ny1\nDistance of top of rectangle from bottom of page.\n\n\ntop\nDistance of top of rectangle from top of page.\n\n\nbottom\nDistance of bottom of the rectangle from top of page.\n\n\ndoctop\nDistance of top of rectangle from top of document.\n\n\nlinewidth\nThickness of line.\n\n\nstroking_color\nThe color of the rectangle's outline. See docs/colors.md for details.\n\n\nnon_stroking_color\nThe rectangle\u2019s fill color. See docs/colors.md for details.\n\n\nobject_type\n\"rect\"\n\n\n\ncurve properties\n\n\n\nProperty\nDescription\n\n\n\n\npage_number\nPage number on which this curve was found.\n\n\npts\nPoints \u2014\u00a0as a list of (x, top) tuples \u2014\u00a0describing the curve.\n\n\nheight\nHeight of curve's bounding box.\n\n\nwidth\nWidth of curve's bounding box.\n\n\nx0\nDistance of curve's left-most point from left side of page.\n\n\nx1\nDistance of curve's right-most point from left side of the page.\n\n\ny0\nDistance of curve's lowest point from bottom of page.\n\n\ny1\nDistance of curve's highest point from bottom of page.\n\n\ntop\nDistance of curve's highest point from top of page.\n\n\nbottom\nDistance of curve's lowest point from top of page.\n\n\ndoctop\nDistance of curve's highest point from top of document.\n\n\nlinewidth\nThickness of line.\n\n\nfill\nWhether the shape defined by the curve's path is filled.\n\n\nstroking_color\nThe color of the curve's outline. See docs/colors.md for details.\n\n\nnon_stroking_color\nThe curve\u2019s fill color. See docs/colors.md for details.\n\n\nobject_type\n\"curve\"\n\n\n\nDerived properties\nAdditionally, both pdfplumber.PDF and pdfplumber.Page provide access to several derived lists of objects: .rect_edges (which decomposes each rectangle into its four lines), .curve_edges (which does the same for curve objects), and .edges (which combines .rect_edges, .curve_edges, and .lines).\nimage properties\n[To be completed.]\nObtaining higher-level layout objects via pdfminer.six\nIf you pass the pdfminer.six-handling laparams parameter to pdfplumber.open(...), then each page's .objects dictionary will also contain pdfminer.six's higher-level layout objects, such as \"textboxhorizontal\".\nVisual debugging\npdfplumber's visual debugging tools can be helpful in understanding the structure of a PDF and the objects that have been extracted from it.\nCreating a PageImage with .to_image()\nTo turn any page (including cropped pages) into an PageImage object, call my_page.to_image(). You can optionally pass one of the  following keyword arguments:\n\nresolution: The desired number pixels per inch. Default: 72. Type: int.\nwidth: The desired image width in pixels. Default: unset, determined by resolution. Type: int.\nheight: The desired image width in pixels. Default: unset, determined by resolution. Type: int.\nantialias: Whether to use antialiasing when creating the image. Setting to True creates images with less-jagged text and graphics, but with larger file sizes. Default: False. 
Type: bool.\n\nFor instance:\nim = my_pdf.pages[0].to_image(resolution=150)\nFrom a script or REPL, im.show() will open the image in your local image viewer. But PageImage objects also play nicely with Jupyter notebooks; they automatically render as cell outputs. For example:\n\nNote: .to_image(...) works as expected with Page.crop(...)/CroppedPage instances, but is unable to incorporate changes made via Page.filter(...)/FilteredPage instances.\nBasic PageImage methods\n\n\n\nMethod\nDescription\n\n\n\n\nim.reset()\nClears anything you've drawn so far.\n\n\nim.copy()\nCopies the image to a new PageImage object.\n\n\nim.show()\nOpens the image in your local image viewer.\n\n\nim.save(path_or_fileobject, format=\"PNG\", quantize=True, colors=256, bits=8)\nSaves the annotated image as a PNG file. The default arguments quantize the image to a palette of 256 colors, saving the PNG with 8-bit color depth. You can disable quantization by passing quantize=False or adjust the size of the color palette by passing colors=N.\n\n\n\nDrawing methods\nYou can pass explicit coordinates or any pdfplumber PDF object (e.g., char, line, rect) to these methods.\n\n\n\nSingle-object method\nBulk method\nDescription\n\n\n\n\nim.draw_line(line, stroke={color}, stroke_width=1)\nim.draw_lines(list_of_lines, **kwargs)\nDraws a line from a line, curve, or a 2-tuple of 2-tuples (e.g., ((x, y), (x, y))).\n\n\nim.draw_vline(location, stroke={color}, stroke_width=1)\nim.draw_vlines(list_of_locations, **kwargs)\nDraws a vertical line at the x-coordinate indicated by location.\n\n\nim.draw_hline(location, stroke={color}, stroke_width=1)\nim.draw_hlines(list_of_locations, **kwargs)\nDraws a horizontal line at the y-coordinate indicated by location.\n\n\nim.draw_rect(bbox_or_obj, fill={color}, stroke={color}, stroke_width=1)\nim.draw_rects(list_of_rects, **kwargs)\nDraws a rectangle from a rect, char, etc., or 4-tuple bounding box.\n\n\nim.draw_circle(center_or_obj, radius=5, fill={color}, stroke={color})\nim.draw_circles(list_of_circles, **kwargs)\nDraws a circle at (x, y) coordinate or at the center of a char, rect, etc.\n\n\n\nNote: The methods above are built on Pillow's ImageDraw methods, but the parameters have been tweaked for consistency with SVG's fill/stroke/stroke_width nomenclature.\nTroubleshooting ImageMagick on Debian-based systems\nIf you're using pdfplumber on a Debian-based system and encounter a PolicyError, you may be able to fix it by changing the following line in /etc/ImageMagick-6/policy.xml from this:\n<policy domain=\"coder\" rights=\"none\" pattern=\"PDF\" />\n... to this:\n<policy domain=\"coder\" rights=\"read|write\" pattern=\"PDF\" />\n(More details about policy.xml available here.)\nExtracting text\npdfplumber can extract text from any given page (including cropped and derived pages). It can also attempt to preserve the layout of that text, as well as to identify the coordinates of words and search queries. Page objects can call the following text-extraction methods:\n\n\n\nMethod\nDescription\n\n\n\n\n.extract_text(x_tolerance=3, y_tolerance=3, layout=False, x_density=7.25, y_density=13, **kwargs)\nCollates all of the page's character objects into a single string.When layout=False: Adds spaces where the difference between the x1 of one character and the x0 of the next is greater than x_tolerance. 
Adds newline characters where the difference between the doctop of one character and the doctop of the next is greater than y_tolerance.When layout=True (experimental feature): Attempts to mimic the structural layout of the text on the page(s), using x_density and y_density to determine the minimum number of characters/newlines per \"point,\" the PDF unit of measurement. All remaining **kwargs are passed to .extract_words(...) (see below), the first step in calculating the layout.\n\n\n.extract_text_simple(x_tolerance=3, y_tolerance=3)\nA slightly faster but less flexible version of .extract_text(...), using a simpler logic.\n\n\n.extract_words(x_tolerance=3, y_tolerance=3, keep_blank_chars=False, use_text_flow=False, horizontal_ltr=True, vertical_ttb=True, extra_attrs=[], split_at_punctuation=False, expand_ligatures=True)\nReturns a list of all word-looking things and their bounding boxes. Words are considered to be sequences of characters where (for \"upright\" characters) the difference between the x1 of one character and the x0 of the next is less than or equal to x_tolerance and where the doctop of one character and the doctop of the next is less than or equal to y_tolerance. A similar approach is taken for non-upright characters, but instead measuring the vertical, rather than horizontal, distances between them. The parameters horizontal_ltr and vertical_ttb indicate whether the words should be read from left-to-right (for horizontal words) / top-to-bottom (for vertical words). Changing keep_blank_chars to True will mean that blank characters are treated as part of a word, not as a space between words. Changing use_text_flow to True will use the PDF's underlying flow of characters as a guide for ordering and segmenting the words, rather than presorting the characters by x/y position. (This mimics how dragging a cursor highlights text in a PDF; as with that, the order does not always appear to be logical.) Passing a list of extra_attrs  (e.g., [\"fontname\", \"size\"] will restrict each words to characters that share exactly the same value for each of those attributes, and the resulting word dicts will indicate those attributes. Setting split_at_punctuation to True will enforce breaking tokens at punctuations specified by string.punctuation; or you can specify the list of separating punctuation by pass a string, e.g., split_at_punctuation='!\"&'()*+,.:;<=>?@[]^`{|}~'. Unless you set expand_ligatures=False, ligatures such as \ufb01 will be expanded into their constituent letters (e.g., fi).\n\n\n.extract_text_lines(layout=False, strip=True, return_chars=True, **kwargs)\nExperimental feature that returns a list of dictionaries representing the lines of text on the page. The strip parameter works analogously to Python's str.strip() method, and returns text attributes without their surrounding whitespace. (Only relevant when layout = True.) Setting return_chars to False will exclude the individual character objects from the returned text-line dicts. The remaining **kwargs are those you would pass to .extract_text(layout=True, ...).\n\n\n.search(pattern, regex=True, case=True, main_group=0, return_groups=True, return_chars=True, layout=False, **kwargs)\nExperimental feature that allows you to search a page's text, returning a list of all instances that match the query. For each instance, the response dictionary object contains the matching text, any regex group matches, the bounding box coordinates, and the char objects themselves. 
pattern can be a compiled regular expression, an uncompiled regular expression, or a non-regex string. If regex is False, the pattern is treated as a non-regex string. If case is False, the search is performed in a case-insensitive manner. Setting main_group restricts the results to a specific regex group within the pattern (default of 0 means the entire match). Setting return_groups and/or return_chars to False will exclude the lists of the matched regex groups and/or characters from being added (as \"groups\" and \"chars\" to the return dicts). The layout parameter operates as it does for .extract_text(...). The remaining **kwargs are those you would pass to .extract_text(layout=True, ...). Note: Zero-width and all-whitespace matches are discarded, because they (generally) have no explicit position on the page.\n\n\n.dedupe_chars(tolerance=1)\nReturns a version of the page with duplicate chars \u2014\u00a0those sharing the same text, fontname, size, and positioning (within tolerance x/y) as other characters \u2014\u00a0removed. (See Issue #71 to understand the motivation.)\n\n\n\nExtracting tables\npdfplumber's approach to table detection borrows heavily from Anssi Nurminen's master's thesis, and is inspired by Tabula. It works like this:\n\nFor any given PDF page, find the lines that are (a) explicitly defined and/or (b) implied by the alignment of words on the page.\nMerge overlapping, or nearly-overlapping, lines.\nFind the intersections of all those lines.\nFind the most granular set of rectangles (i.e., cells) that use these intersections as their vertices.\nGroup contiguous cells into tables.\n\nTable-extraction methods\npdfplumber.Page objects can call the following table methods:\n\n\n\nMethod\nDescription\n\n\n\n\n.find_tables(table_settings={})\nReturns a list of Table objects. The Table object provides access to the .cells, .rows, and .bbox properties, as well as the .extract(x_tolerance=3, y_tolerance=3) method.\n\n\n.find_table(table_settings={})\nSimilar to .find_tables(...), but returns the largest table on the page, as a Table object. If multiple tables have the same size \u2014\u00a0as measured by the number of cells \u2014\u00a0this method returns the table closest to the top of the page.\n\n\n.extract_tables(table_settings={})\nReturns the text extracted from all tables found on the page, represented as a list of lists of lists, with the structure table -> row -> cell.\n\n\n.extract_table(table_settings={})\nReturns the text extracted from the largest table on the page (see .find_table(...) above), represented as a list of lists, with the structure row -> cell.\n\n\n.debug_tablefinder(table_settings={})\nReturns an instance of the TableFinder class, with access to the .edges, .intersections, .cells, and .tables properties.\n\n\n\nFor example:\npdf = pdfplumber.open(\"path/to/my.pdf\")\npage = pdf.pages[0]\npage.extract_table()\nClick here for a more detailed example.\nTable-extraction settings\nBy default, extract_tables uses the page's vertical and horizontal lines (or rectangle edges) as cell-separators. But the method is highly customizable via the table_settings argument. 
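For instance, a short sketch that overrides just the two strategy settings (assuming a local path/to/my.pdf; the full list of settings and their defaults follows below):

import pdfplumber

with pdfplumber.open("path/to/my.pdf") as pdf:
    page = pdf.pages[0]
    # Infer column boundaries from word alignment, keep graphical lines for rows
    table = page.extract_table(table_settings={
        "vertical_strategy": "text",
        "horizontal_strategy": "lines",
    })
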
The possible settings, and their defaults:\n{\n    \"vertical_strategy\": \"lines\", \n    \"horizontal_strategy\": \"lines\",\n    \"explicit_vertical_lines\": [],\n    \"explicit_horizontal_lines\": [],\n    \"snap_tolerance\": 3,\n    \"snap_x_tolerance\": 3,\n    \"snap_y_tolerance\": 3,\n    \"join_tolerance\": 3,\n    \"join_x_tolerance\": 3,\n    \"join_y_tolerance\": 3,\n    \"edge_min_length\": 3,\n    \"min_words_vertical\": 3,\n    \"min_words_horizontal\": 1,\n    \"keep_blank_chars\": False,\n    \"text_tolerance\": 3,\n    \"text_x_tolerance\": 3,\n    \"text_y_tolerance\": 3,\n    \"intersection_tolerance\": 3,\n    \"intersection_x_tolerance\": 3,\n    \"intersection_y_tolerance\": 3,\n}\n\n\n\nSetting\nDescription\n\n\n\n\n\"vertical_strategy\"\nEither \"lines\", \"lines_strict\", \"text\", or \"explicit\". See explanation below.\n\n\n\"horizontal_strategy\"\nEither \"lines\", \"lines_strict\", \"text\", or \"explicit\". See explanation below.\n\n\n\"explicit_vertical_lines\"\nA list of vertical lines that explicitly demarcate cells in the table. Can be used in combination with any of the strategies above. Items in the list should be either numbers \u2014\u00a0indicating the x coordinate of a line the full height of the page \u2014\u00a0or line/rect/curve objects.\n\n\n\"explicit_horizontal_lines\"\nA list of horizontal lines that explicitly demarcate cells in the table. Can be used in combination with any of the strategies above. Items in the list should be either numbers \u2014\u00a0indicating the y coordinate of a line the full height of the page \u2014\u00a0or line/rect/curve objects.\n\n\n\"snap_tolerance\", \"snap_x_tolerance\", \"snap_y_tolerance\"\nParallel lines within snap_tolerance pixels will be \"snapped\" to the same horizontal or vertical position.\n\n\n\"join_tolerance\", \"join_x_tolerance\", \"join_y_tolerance\"\nLine segments on the same infinite line, and whose ends are within join_tolerance of one another, will be \"joined\" into a single line segment.\n\n\n\"edge_min_length\"\nEdges shorter than edge_min_length will be discarded before attempting to reconstruct the table.\n\n\n\"min_words_vertical\"\nWhen using \"vertical_strategy\": \"text\", at least min_words_vertical words must share the same alignment.\n\n\n\"min_words_horizontal\"\nWhen using \"horizontal_strategy\": \"text\", at least min_words_horizontal words must share the same alignment.\n\n\n\"intersection_tolerance\", \"intersection_x_tolerance\", \"intersection_y_tolerance\"\nWhen combining edges into cells, orthogonal edges must be within intersection_tolerance pixels to be considered intersecting.\n\n\n\"text_*\"\nAll settings prefixed with text_ are then used when extracting text from each discovered table. All possible arguments to Page.extract_text(...) are also valid here.\n\n\n\"text_x_tolerance\", \"text_y_tolerance\"\nThese text_-prefixed settings also apply to the table-identification algorithm when the text strategy is used. 
I.e., when that algorithm searches for words, it will expect the individual letters in each word to be no more than `text_[x\n\n\n\nTable-extraction strategies\nBoth vertical_strategy and horizontal_strategy accept the following options:\n\n\n\nStrategy\nDescription\n\n\n\n\n\"lines\"\nUse the page's graphical lines \u2014\u00a0including the sides of rectangle objects \u2014\u00a0as the borders of potential table-cells.\n\n\n\"lines_strict\"\nUse the page's graphical lines \u2014\u00a0but not the sides of rectangle objects \u2014\u00a0as the borders of potential table-cells.\n\n\n\"text\"\nFor vertical_strategy: Deduce the (imaginary) lines that connect the left, right, or center of words on the page, and use those lines as the borders of potential table-cells. For horizontal_strategy, the same but using the tops of words.\n\n\n\"explicit\"\nOnly use the lines explicitly defined in explicit_vertical_lines / explicit_horizontal_lines.\n\n\n\nNotes\n\n\nOften it's helpful to crop a page \u2014\u00a0Page.crop(bounding_box) \u2014\u00a0before trying to extract the table.\n\n\nTable extraction for pdfplumber was radically redesigned for v0.5.0, and introduced breaking changes.\n\n\nExtracting form values\nSometimes PDF files can contain forms that include inputs that people can fill out and save. While values in form fields appear like other text in a PDF file, form data is handled differently. If you want the gory details, see page 671 of this specification.\npdfplumber doesn't have an interface for working with form data, but you can access it using pdfplumber's wrappers around pdfminer.\nFor example, this snippet will retrieve form field names and values and store them in a dictionary.\nimport pdfplumber\nfrom pdfplumber.utils.pdfinternals import resolve_and_decode, resolve\n\npdf = pdfplumber.open(\"document_with_form.pdf\")\n\ndef parse_field_helper(form_data, field, prefix=None):\n    \"\"\" appends any PDF AcroForm field/value pairs in `field` to provided `form_data` list\n\n        if `field` has child fields, those will be parsed recursively.\n    \"\"\"\n    resolved_field = field.resolve()\n    field_name = '.'.join(filter(lambda x: x, [prefix, resolve_and_decode(resolved_field.get(\"T\"))]))\n    if \"Kids\" in resolved_field:\n        for kid_field in resolved_field[\"Kids\"]:\n            parse_field_helper(form_data, kid_field, prefix=field_name)\n    if \"T\" in resolved_field or \"TU\" in resolved_field:\n        # \"T\" is a field-name, but it's sometimes absent.\n        # \"TU\" is the \"alternate field name\" and is often more human-readable\n        # your PDF may have one, the other, or both.\n        alternate_field_name  = resolve_and_decode(resolved_field.get(\"TU\")) if resolved_field.get(\"TU\") else None\n        field_value = resolve_and_decode(resolved_field[\"V\"]) if 'V' in resolved_field else None\n        form_data.append([field_name, alternate_field_name, field_value])\n\n\nform_data = []\nfields = resolve(pdf.doc.catalog[\"AcroForm\"])[\"Fields\"]\nfor field in fields:\n    parse_field_helper(form_data, field)\nOnce you run this script, form_data is a list containing a three-element tuple for each form element. For instance, a PDF form with a city and state field might look like this.\n[\n ['STATE.0', 'enter STATE', 'CA'],\n ['section 2  accident infoRmation.1.0',\n  'enter city of accident',\n  'SAN FRANCISCO']\n]\n\nDemonstrations\n\nUsing extract_table on a California Worker Adjustment and Retraining Notification (WARN) report. 
Demonstrates basic visual debugging and table extraction.\nUsing extract_table on the FBI's National Instant Criminal Background Check System PDFs. Demonstrates how to use visual debugging to find optimal table extraction settings. Also demonstrates Page.crop(...) and Page.extract_text(...).\nInspecting and visualizing curve objects.\nExtracting fixed-width data from a San Jose PD firearm search report, an example of using Page.extract_text(...).\n\nComparison to other libraries\nSeveral other Python libraries help users to extract information from PDFs. As a broad overview, pdfplumber distinguishes itself from other PDF processing libraries by combining these features:\n\nEasy access to detailed information about each PDF object\nHigher-level, customizable methods for extracting text and tables\nTightly integrated visual debugging\nOther useful utility functions, such as filtering objects via a crop-box\n\nIt's also helpful to know what features pdfplumber does not provide:\n\nPDF generation\nPDF modification\nOptical character recognition (OCR)\nStrong support for extracting tables from OCR'ed documents\n\nSpecific comparisons\n\n\npdfminer.six provides the foundation for pdfplumber. It primarily focuses on parsing PDFs, analyzing PDF layouts and object positioning, and extracting text. It does not provide tools for table extraction or visual debugging.\n\n\nPyPDF2 is a pure-Python library \"capable of splitting, merging, cropping, and transforming the pages of PDF files. It can also add custom data, viewing options, and passwords to PDF files.\" It can extract page text, but does not provide easy access to shape objects (rectangles, lines, etc.), table-extraction, or visually debugging tools.\n\n\npymupdf is substantially faster than pdfminer.six (and thus also pdfplumber) and can generate and modify PDFs, but the library requires installation of non-Python software (MuPDF). It also does not enable easy access to shape objects (rectangles, lines, etc.), and does not provide table-extraction or visual debugging tools.\n\n\ncamelot, tabula-py, and pdftables all focus primarily on extracting tables. In some cases, they may be better suited to the particular tables you are trying to extract.\n\n\nAcknowledgments / Contributors\nMany thanks to the following users who've contributed ideas, features, and fixes:\n\nJacob Fenton\nDan Nguyen\nJeff Barrera\nBob Lannon\nDustin Tindall\n@yevgnen\n@meldonization\nOis\u00edn Moran\nSamkit Jain\nFrancisco Aranda\nKwok-kuen Cheung\nMarco\nIdan David\n@xv44586\nAlexander Regueiro\nDaniel Pe\u00f1a\n@bobluda\n@ramcdona\n@johnhuge\nJhonatan Lopes\nEthan Corey\nShannon Shen\nMatsumoto Toshi\nJohn West\nJeremy B. Merrill\n\nContributing\nPull requests are welcome, but please submit a proposal issue first, as the library is in active development.\nCurrent maintainers:\n\nJeremy Singer-Vine\nSamkit Jain\n\n\n\n", "description": "Extract text and tables from PDFs and provide visual debugging tools."}, {"name": "pdfminer.six", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npdfminer.six\nFeatures\nHow to use\nContributing\nAcknowledgement\n\n\n\n\n\nREADME.md\n\n\n\n\npdfminer.six\n\n\n\nWe fathom PDF\nPdfminer.six is a community maintained fork of the original PDFMiner. It is a tool for extracting information from PDF\ndocuments. It focuses on getting and analyzing text data. Pdfminer.six extracts the text from a page directly from the\nsourcecode of the PDF. 
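As a rough sketch (assuming an example.pdf in the working directory), the high-level layout API can walk a page and report each character's font and position:

from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer, LTChar

for page_layout in extract_pages("example.pdf"):
    for element in page_layout:
        if isinstance(element, LTTextContainer):
            for text_line in element:
                for ch in text_line:
                    # LTChar objects carry per-character font and bounding-box data
                    if isinstance(ch, LTChar):
                        print(ch.get_text(), ch.fontname, ch.bbox)
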
It can also be used to get the exact location, font or color of the text.\nIt is built in a modular way such that each component of pdfminer.six can be replaced easily. You can implement your own\ninterpreter or rendering device that uses the power of pdfminer.six for other purposes than text analysis.\nCheck out the full documentation on\nRead the Docs.\nFeatures\n\nWritten entirely in Python.\nParse, analyze, and convert PDF documents.\nExtract content as text, images, html or hOCR.\nPDF-1.7 specification support. (well, almost).\nCJK languages and vertical writing scripts support.\nVarious font types (Type1, TrueType, Type3, and CID) support.\nSupport for extracting images (JPG, JBIG2, Bitmaps).\nSupport for various compressions (ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode,\nCCITTFaxDecode)\nSupport for RC4 and AES encryption.\nSupport for AcroForm interactive form extraction.\nTable of contents extraction.\nTagged contents extraction.\nAutomatic layout analysis.\n\nHow to use\n\n\nInstall Python 3.6 or newer.\n\n\nInstall pdfminer.six.\npip install pdfminer.six\n\n\n(Optionally) install extra dependencies for extracting images.\npip install 'pdfminer.six[image]'\n\n\nUse the command-line interface to extract text from pdf.\npdf2txt.py example.pdf\n\n\nOr use it with Python.\n\n\nfrom pdfminer.high_level import extract_text\n\ntext = extract_text(\"example.pdf\")\nprint(text)\nContributing\nBe sure to read the contribution guidelines.\nAcknowledgement\nThis repository includes code from pyHanko ; the original license has been included here.\n\n\n", "description": "Extract text and layout information from PDF documents."}, {"name": "pdfkit", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPython-PDFKit: HTML to PDF wrapper\nInstallation\nUsage\nConfiguration\nTroubleshooting\nDebugging issues with PDF generation\nCommon errors:\n\n\n\n\n\nREADME.rst\n\n\n\n\nPython-PDFKit: HTML to PDF wrapper\n\n\n\nPython 2 and 3 wrapper for wkhtmltopdf utility to convert HTML to PDF using Webkit.\nThis is adapted version of ruby PDFKit library, so big thanks to them!\n\nInstallation\n\nInstall python-pdfkit:\n\n$ pip install pdfkit  (or pip3 for python3)\n\nInstall wkhtmltopdf:\n\n\nDebian/Ubuntu:\n\n$ sudo apt-get install wkhtmltopdf\n\nmacOS:\n\n$ brew install homebrew/cask/wkhtmltopdf\nWarning! Version in debian/ubuntu repos have reduced functionality (because it compiled without the wkhtmltopdf QT patches), such as adding outlines, headers, footers, TOC etc. To use this options you should install static binary from wkhtmltopdf site or you can use this script (written for CI servers with Ubuntu 18.04 Bionic, but it could work on other Ubuntu/Debian versions).\n\nWindows and other options: check wkhtmltopdf homepage for binary installers\n\n\nUsage\nFor simple tasks:\nimport pdfkit\n\npdfkit.from_url('http://google.com', 'out.pdf')\npdfkit.from_file('test.html', 'out.pdf')\npdfkit.from_string('Hello!', 'out.pdf')\nYou can pass a list with multiple URLs or files:\npdfkit.from_url(['google.com', 'yandex.ru', 'engadget.com'], 'out.pdf')\npdfkit.from_file(['file1.html', 'file2.html'], 'out.pdf')\nAlso you can pass an opened file:\nwith open('file.html') as f:\n    pdfkit.from_file(f, 'out.pdf')\nIf you wish to further process generated PDF, you can read it to a variable:\n# Without output_path, PDF is returned for assigning to a variable\npdf = pdfkit.from_url('http://google.com')\nYou can specify all wkhtmltopdf options. You can drop '--' in option name. 
If an option takes no value, use None, False or '' as the dict value. For repeatable options (incl. allow, cookie, custom-header, post, postfile, run-script, replace) you may use a list or a tuple. For options that need multiple values (e.g. --custom-header Authorization secret) you may use a 2-tuple (see the example below).\noptions = {\n    'page-size': 'Letter',\n    'margin-top': '0.75in',\n    'margin-right': '0.75in',\n    'margin-bottom': '0.75in',\n    'margin-left': '0.75in',\n    'encoding': \"UTF-8\",\n    'custom-header': [\n        ('Accept-Encoding', 'gzip')\n    ],\n    'cookie': [\n        ('cookie-empty-value', '\"\"'),\n        ('cookie-name1', 'cookie-value1'),\n        ('cookie-name2', 'cookie-value2'),\n    ],\n    'no-outline': None\n}\n\npdfkit.from_url('http://google.com', 'out.pdf', options=options)\nBy default, PDFKit will run wkhtmltopdf with the quiet option turned on, since in most cases output is not needed and can cause excessive memory usage and corrupted results. If you need to see the wkhtmltopdf output, pass verbose=True to the API calls:\npdfkit.from_url('google.com', 'out.pdf', verbose=True)\nDue to wkhtmltopdf command syntax, TOC and Cover options must be specified separately. If you need the cover before the TOC, use the cover_first option:\ntoc = {\n    'xsl-style-sheet': 'toc.xsl'\n}\n\ncover = 'cover.html'\n\npdfkit.from_file('file.html', options=options, toc=toc, cover=cover)\npdfkit.from_file('file.html', options=options, toc=toc, cover=cover, cover_first=True)\nYou can specify external CSS files when converting files or strings using the css option.\nWarning: This is a workaround for this bug in wkhtmltopdf. You should try the --user-style-sheet option first.\n# Single CSS file\ncss = 'example.css'\npdfkit.from_file('file.html', options=options, css=css)\n\n# Multiple CSS files\ncss = ['example.css', 'example2.css']\npdfkit.from_file('file.html', options=options, css=css)\nYou can also pass any options through meta tags in your HTML:\nbody = \"\"\"\n    <html>\n      <head>\n        <meta name=\"pdfkit-page-size\" content=\"Legal\"/>\n        <meta name=\"pdfkit-orientation\" content=\"Landscape\"/>\n      </head>\n      Hello World!\n      </html>\n    \"\"\"\n\npdfkit.from_string(body, 'out.pdf') #with --page-size=Legal and --orientation=Landscape\n\nConfiguration\nEach API call takes an optional configuration parameter. This should be an instance returned by the pdfkit.configuration() API call. It takes the configuration options as initial parameters. The available options are:\n\nwkhtmltopdf - the location of the wkhtmltopdf binary.
By default pdfkit will attempt to locate this using which (on UNIX type systems) or where (on Windows).\nmeta_tag_prefix - the prefix for pdfkit specific meta tags - by default this is pdfkit-\n\nExample - for when wkhtmltopdf is not on $PATH:\nconfig = pdfkit.configuration(wkhtmltopdf='/opt/bin/wkhtmltopdf')\npdfkit.from_string(html_string, output_file, configuration=config)\nYou can also use the configuration() call to check whether wkhtmltopdf is present in $PATH:\ntry:\n  config = pdfkit.configuration()\n  pdfkit.from_string(html_string, output_file)\nexcept OSError:\n  pass  # wkhtmltopdf is not present in PATH\n\nTroubleshooting\n\nDebugging issues with PDF generation\nIf you are struggling to generate a correct PDF, first check the wkhtmltopdf output for clues; you can get it by passing verbose=True to the API calls:\npdfkit.from_url('http://google.com', 'out.pdf', verbose=True)\nIf you are getting strange results in the PDF, or some option looks like it is ignored, try running wkhtmltopdf directly to see if it produces the same result. You can get the CLI command by creating the pdfkit.PDFKit class directly and then calling its command() method:\nimport pdfkit\n\nr = pdfkit.PDFKit('html', 'string', verbose=True)\nprint(' '.join(r.command()))\n# try running wkhtmltopdf to create PDF\noutput = r.to_pdf()\n\nCommon errors:\n\nIOError: 'No wkhtmltopdf executable found':\nMake sure that you have wkhtmltopdf in your $PATH or set via a custom configuration (see the preceding section). where wkhtmltopdf on Windows or which wkhtmltopdf on Linux should return the actual path to the binary.\n\nIOError: 'Command Failed'\nThis error means that PDFKit was unable to process an input. You can try running the command from the error message directly and see what error caused the failure (on some wkhtmltopdf versions this can be caused by segmentation faults).\n\n\n\n\n", "description": "HTML to PDF converter that uses wkhtmltopdf."}, {"name": "pdf2image", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npdf2image\nHow to install\nWindows\nMac\nLinux\nPlatform-independent (Using conda)\nHow does it work?\nWhat's new?\nPerformance tips\nLimitations / known issues\n\n\n\n\n\nREADME.md\n\n\n\n\npdf2image\n    \nA Python (3.7+) module that wraps pdftoppm and pdftocairo to convert PDF to a PIL Image object\nHow to install\npip install pdf2image\nWindows\nWindows users will have to build or download poppler for Windows. I recommend @oschwartz10612's version, which is the most up-to-date. You will then have to add the bin/ folder to PATH or use poppler_path = r\"C:\\path\\to\\poppler-xx\\bin\" as an argument in convert_from_path.\nMac\nMac users will have to install poppler.\nInstalling using Brew:\nbrew install poppler\n\nLinux\nMost distros ship with pdftoppm and pdftocairo.
If they are not installed, refer to your package manager to install poppler-utils.\nPlatform-independent (Using conda)\n\nInstall poppler: conda install -c conda-forge poppler\nInstall pdf2image: pip install pdf2image\n\nHow does it work?\nfrom pdf2image import convert_from_path, convert_from_bytes\nfrom pdf2image.exceptions import (\n    PDFInfoNotInstalledError,\n    PDFPageCountError,\n    PDFSyntaxError\n)\nThen simply do:\nimages = convert_from_path('/home/belval/example.pdf')\nOR\nimages = convert_from_bytes(open('/home/belval/example.pdf', 'rb').read())\nOR better yet\nimport tempfile\n\nwith tempfile.TemporaryDirectory() as path:\n    images_from_path = convert_from_path('/home/belval/example.pdf', output_folder=path)\n    # Do something here\nimages will be a list of PIL Image objects representing each page of the PDF document.\nHere are the definitions:\nconvert_from_path(pdf_path, dpi=200, output_folder=None, first_page=None, last_page=None, fmt='ppm', jpegopt=None, thread_count=1, userpw=None, use_cropbox=False, strict=False, transparent=False, single_file=False, output_file=str(uuid.uuid4()), poppler_path=None, grayscale=False, size=None, paths_only=False, use_pdftocairo=False, timeout=600, hide_attributes=False)\nconvert_from_bytes(pdf_file, dpi=200, output_folder=None, first_page=None, last_page=None, fmt='ppm', jpegopt=None, thread_count=1, userpw=None, use_cropbox=False, strict=False, transparent=False, single_file=False, output_file=str(uuid.uuid4()), poppler_path=None, grayscale=False, size=None, paths_only=False, use_pdftocairo=False, timeout=600, hide_attributes=False)\nWhat's new?\n\nAllow users to hide attributes when using pdftoppm with hide_attributes (Thank you @StaticRocket)\nFix console opening on Windows (Thank you @OhMyAgnes!)\nAdd timeout parameter which raises PDFPopplerTimeoutError after the given number of seconds.\nAdd use_pdftocairo parameter which forces pdf2image to use pdftocairo. Should improve performance.\nFixed a bug where using pdf2image with multiple threads (but not multiple processes) would cause an exception\njpegopt parameter allows for tuning of the output JPEG when using fmt=\"jpeg\" (-jpegopt in pdftoppm CLI) (Thank you @abieler)\npdfinfo_from_path and pdfinfo_from_bytes which expose the output of the pdfinfo CLI\npaths_only parameter will return image paths instead of Image objects, to prevent OOM when converting a big PDF\nsize parameter allows you to define the shape of the resulting images (-scale-to in pdftoppm CLI)\n\nsize=400\u00a0will fit the image to a 400x400 box, preserving aspect ratio\nsize=(400, None) will make the image 400 pixels wide, preserving aspect ratio\nsize=(500, 500) will resize the image to 500x500 pixels, not preserving aspect ratio\n\n\ngrayscale parameter allows you to convert images to grayscale (-gray in pdftoppm CLI)\nsingle_file parameter allows you to convert the first PDF page only, without adding digits at the end of the output_file\nAllow the user to specify poppler's installation path with poppler_path\n\nPerformance tips\n\nUsing an output folder is significantly faster if you are using an SSD.
Otherwise i/o usually becomes the bottleneck.\nUsing multiple threads can give you some gains but avoid more than 4 as this will cause i/o bottleneck (even on my NVMe SSD!).\nIf i/o is your bottleneck, using the JPEG format can lead to significant gains.\nPNG format is pretty slow, this is because of the compression.\nIf you want to know the best settings (most settings will be fine anyway) you can clone the project and run python tests.py to get timings.\n\nLimitations / known issues\n\nA relatively big PDF will use up all your memory and cause the process to be killed (unless you use an output folder)\nSometimes fail read pdf signed using DocuSign, Solution for DocuSign issue.\n\n\n\n", "description": "Wrap pdftoppm and pdftocairo to convert PDF to PIL Image in Python."}, {"name": "patsy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nPatsy\nDependencies\nInstallation\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nPatsy\nNotice: patsy is no longer under active development. As of August 2021,\nMatthew Wardrop (@matthewwardrop) and Tom\u00e1s Capretto (@tomicapretto) have taken\non responsibility from Nathaniel Smith (@njsmith) for keeping the lights on, but\nno new feature development is planned. The spiritual successor of this project\nis Formulaic, and we\nrecommend those interested in new feature development contribute there. Those\nwhose use-cases continue to be met by patsy can continue using this package\nwith increased confidence that things will continue to work as is for the\nforeseeable future.\n\nPatsy is a Python library for describing statistical models\n(especially linear models, or models that have a linear component) and\nbuilding design matrices. Patsy brings the convenience of R \"formulas\" to Python.\n\n\n\n\n\n\n\n\nDocumentation: https://patsy.readthedocs.io/\nDownloads: http://pypi.python.org/pypi/patsy/\nCode and issues: https://github.com/pydata/patsy\nMailing list: pydata@googlegroups.com (http://groups.google.com/group/pydata)\n\nDependencies\n\nPython (2.6, 2.7, or 3.3+)\nsix\nnumpy\nOptional:\n\npytest/pytest-cov: needed to run tests\nscipy: needed for spline-related functions like bs\n\n\n\nInstallation\npip install patsy (or, for traditionalists: python setup.py install)\nLicense\n2-clause BSD, see LICENSE.txt for details.\n\n\n", "description": "Describe statistical models in Python inspired by R formulas."}, {"name": "pathy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPathy: a Path interface for local and cloud bucket storage\n\ud83d\ude80 Quickstart\nSupported Clouds\nGoogle Cloud Storage\nAmazon S3\nAzure\nSemantic Versioning\n\ud83c\udf9b API\nPathy class\nexists method\nfluid classmethod\nfrom_bucket classmethod\nglob method\nis_dir method\nis_file method\niterdir method\nls method\nmkdir method\nopen method\nowner method\nrename method\nreplace method\nresolve method\nrglob method\nrmdir method\nsamefile method\nstat method\nto_local classmethod\ntouch method\nBlobStat dataclass\nuse_fs function\nget_fs_client function\nuse_fs_cache function\nget_fs_cache function\nset_client_params function\nCLI\ncp\nls\nmv\nrm\nCredits\n\n\n\n\n\nREADME.md\n\n\n\n\nPathy: a Path interface for local and cloud bucket storage\n\n\n\n\nPathy is a python package (with type annotations) for working with Cloud Bucket storage providers using a pathlib interface. It provides an easy-to-use API bundled with a CLI app for basic file operations between local files and remote buckets. 
It enables a smooth developer experience by letting developers work against the local file system during development and only switch over to live APIs for deployment. It also makes converting bucket blobs into local files a snap with optional local file caching.\n\ud83d\ude80 Quickstart\nYou can install pathy from pip:\npip install pathy\nThe package exports the Pathy class and utilities for configuring the bucket storage provider to use.\nfrom pathy import Pathy, use_fs\n# Use the local file-system for quicker development\nuse_fs()\n# Create a bucket\nPathy(\"gs://my_bucket\").mkdir(exist_ok=True)\n# An excellent blob\ngreeting = Pathy(f\"gs://my_bucket/greeting.txt\")\n# But it doesn't exist yet\nassert not greeting.exists()\n# Create it by writing some text\ngreeting.write_text(\"Hello World!\")\n# Now it exists\nassert greeting.exists()\n# Delete it\ngreeting.unlink()\n# Now it doesn't\nassert not greeting.exists()\nSupported Clouds\nThe table below details the supported cloud provider APIs.\n\n\n\nCloud Service\nSupport\nInstall Extras\n\n\n\n\nGoogle Cloud Storage\n\u2705\npip install pathy[gcs]\n\n\nAmazon S3\n\u2705\npip install pathy[s3]\n\n\nAzure\n\u2705\npip install pathy[azure]\n\n\n\nGoogle Cloud Storage\nGoogle recommends using a JSON credentials file, which you can specify by path:\nfrom google.oauth2 import service_account\nfrom pathy import set_client_params\n\ncredentials = service_account.Credentials.from_service_account_file(\"./my-creds.json\")\nset_client_params(\"gs\", credentials=credentials)\nAmazon S3\nS3 uses a JSON credentials file, which you can specify by path:\nfrom pathy import set_client_params\n\nset_client_params(\"s3\", key_id=\"YOUR_ACCESS_KEY_ID\", key_secret=\"YOUR_ACCESS_SECRET\")\nAzure\nAzure blob storage can be passed a connection_string:\nfrom pathy import set_client_params\n\nset_client_params(\"azure\", connection_string=\"YOUR_CONNECTION_STRING\")\nor a BlobServiceClient instance:\nfrom azure.storage.blob import BlobServiceClient\nfrom pathy import set_client_params\n\nservice: BlobServiceClient = BlobServiceClient.from_connection_string(\n    \"YOUR_CONNECTION_STRING\"\n)\nset_client_params(\"azure\", service=service)\nSemantic Versioning\nBefore Pathy reaches v1.0 the project is not guaranteed to have a consistent API, which means that types and classes may move around or be removed. That said, we try to be predictable when it comes to breaking changes, so the project uses semantic versioning to help users avoid breakage.\nSpecifically, new releases increase the patch semver component for new features and fixes, and the minor component when there are breaking changes. 
If you don't know much about semver strings, they're usually formatted {major}.{minor}.{patch} so increasing the patch component means incrementing the last number.\nConsider a few examples:\n\n\n\nFrom Version\nTo Version\nChanges are Breaking\n\n\n\n\n0.2.0\n0.2.1\nNo\n\n\n0.3.2\n0.3.6\nNo\n\n\n0.3.1\n0.3.17\nNo\n\n\n0.2.2\n0.3.0\nYes\n\n\n\nIf you are concerned about breaking changes, you can pin the version in your requirements so that it does not go beyond the current semver minor component, for example if the current version was 0.1.37:\npathy>=0.1.37,<0.2.0\n\n\ud83c\udf9b API\nPathy class\nPathy(self, args, kwargs)\nSubclass of pathlib.Path that works with bucket APIs.\nexists method\nPathy.exists(self) -> bool\nReturns True if the path points to an existing bucket, blob, or prefix.\nfluid classmethod\nPathy.fluid(\n    path_candidate: Union[str, Pathy, BasePath],\n) -> Union[Pathy, BasePath]\nInfer either a Pathy or pathlib.Path from an input path or string.\nThe returned type is a union of the potential FluidPath types and will\ntype-check correctly against the minimum overlapping APIs of all the input\ntypes.\nIf you need to use specific implementation details of a type, \"narrow\" the\nreturn of this function to the desired type, e.g.\nfrom pathy import FluidPath, Pathy\n\nfluid_path: FluidPath = Pathy.fluid(\"gs://my_bucket/foo.txt\")\n# Narrow the type to a specific class\nassert isinstance(fluid_path, Pathy), \"must be Pathy\"\n# Use a member specific to that class\nassert fluid_path.prefix == \"foo.txt/\"\nfrom_bucket classmethod\nPathy.from_bucket(bucket_name: str, scheme: str = 'gs') -> 'Pathy'\nInitialize a Pathy from a bucket name. This helper adds a trailing slash and\nthe appropriate prefix.\nfrom pathy import Pathy\n\nassert str(Pathy.from_bucket(\"one\")) == \"gs://one/\"\nassert str(Pathy.from_bucket(\"two\")) == \"gs://two/\"\nglob method\nPathy.glob(\n    self: 'Pathy',\n    pattern: str,\n) -> Generator[Pathy, NoneType, NoneType]\nPerform a glob match relative to this Pathy instance, yielding all matched\nblobs.\nis_dir method\nPathy.is_dir(self: 'Pathy') -> bool\nDetermine if the path points to a bucket or a prefix of a given blob\nin the bucket.\nReturns True if the path points to a bucket or a blob prefix.\nReturns False if it points to a blob or the path doesn't exist.\nis_file method\nPathy.is_file(self: 'Pathy') -> bool\nDetermine if the path points to a blob in the bucket.\nReturns True if the path points to a blob.\nReturns False if it points to a bucket or blob prefix, or if the path doesn\u2019t\nexist.\niterdir method\nPathy.iterdir(\n    self: 'Pathy',\n) -> Generator[Pathy, NoneType, NoneType]\nIterate over the blobs found in the given bucket or blob prefix path.\nls method\nPathy.ls(self: 'Pathy') -> Generator[BlobStat, NoneType, NoneType]\nList blob names with stat information under the given path.\nThis is considerably faster than using iterdir if you also need\nthe stat information for the enumerated blobs.\nYields BlobStat objects for each found blob.\nmkdir method\nPathy.mkdir(\n    self,\n    mode: int = 511,\n    parents: bool = False,\n    exist_ok: bool = False,\n) -> None\nCreate a bucket from the given path. Since bucket APIs only have implicit\nfolder structures (determined by the existence of a blob with an overlapping\nprefix) this does nothing other than create buckets.\nIf parents is False, the bucket will only be created if the path points to\nexactly the bucket and nothing else. 
If parents is true the bucket will be\ncreated even if the path points to a specific blob.\nThe mode param is ignored.\nRaises FileExistsError if exist_ok is false and the bucket already exists.\nopen method\nPathy.open(\n    self: 'Pathy',\n    mode: str = 'r',\n    buffering: int = 8192,\n    encoding: Optional[str] = None,\n    errors: Optional[str] = None,\n    newline: Optional[str] = None,\n) -> IO[Any]\nOpen the given blob for streaming. This delegates to the smart_open\nlibrary that handles large file streaming for a number of bucket API\nproviders.\nowner method\nPathy.owner(self: 'Pathy') -> Optional[str]\nReturns the name of the user that owns the bucket or blob\nthis path points to. Returns None if the owner is unknown or\nnot supported by the bucket API provider.\nrename method\nPathy.rename(self: 'Pathy', target: Union[str, pathlib.PurePath]) -> 'Pathy'\nRename this path to the given target.\nIf the target exists and is a file, it will be replaced silently if the user\nhas permission.\nIf path is a blob prefix, it will replace all the blobs with the same prefix\nto match the target prefix.\nreplace method\nPathy.replace(self: 'Pathy', target: Union[str, pathlib.PurePath]) -> 'Pathy'\nRenames this path to the given target.\nIf target points to an existing path, it will be replaced.\nresolve method\nPathy.resolve(self, strict: bool = False) -> 'Pathy'\nResolve the given path to remove any relative path specifiers.\nfrom pathy import Pathy\n\npath = Pathy(\"gs://my_bucket/folder/../blob\")\nassert path.resolve() == Pathy(\"gs://my_bucket/blob\")\nrglob method\nPathy.rglob(\n    self: 'Pathy',\n    pattern: str,\n) -> Generator[Pathy, NoneType, NoneType]\nPerform a recursive glob match relative to this Pathy instance, yielding\nall matched blobs. Imagine adding \"**/\" before a call to glob.\nrmdir method\nPathy.rmdir(self: 'Pathy') -> None\nRemoves this bucket or blob prefix. 
It must be empty.\nsamefile method\nPathy.samefile(\n    self: 'Pathy',\n    other_path: Union[str, bytes, int, pathlib.Path],\n) -> bool\nDetermine if this path points to the same location as other_path.\nstat method\nPathy.stat(self: 'Pathy') -> pathy.BlobStat\nReturns information about this bucket path.\nto_local classmethod\nPathy.to_local(\n    blob_path: Union[Pathy, str],\n    recurse: bool = True,\n) -> pathlib.Path\nDownload and cache either a blob or a set of blobs matching a prefix.\nThe cache is sensitive to the file updated time, and downloads new blobs\nas their updated timestamps change.\ntouch method\nPathy.touch(self: 'Pathy', mode: int = 438, exist_ok: bool = True) -> None\nCreate a blob at this path.\nIf the blob already exists, the function succeeds if exist_ok is true\n(and its modification time is updated to the current time), otherwise\nFileExistsError is raised.\nBlobStat dataclass\nBlobStat(\n    self,\n    name: str,\n    size: Optional[int],\n    last_modified: Optional[int],\n) -> None\nStat for a bucket item\nuse_fs function\nuse_fs(\n    root: Optional[str, pathlib.Path, bool] = None,\n) -> Optional[pathy.BucketClientFS]\nUse a path in the local file-system to store blobs and buckets.\nThis is useful for development and testing situations, and for embedded\napplications.\nget_fs_client function\nget_fs_client() -> Optional[pathy.BucketClientFS]\nGet the file-system client (or None)\nuse_fs_cache function\nuse_fs_cache(\n    root: Optional[str, pathlib.Path, bool] = None,\n) -> Optional[pathlib.Path]\nUse a path in the local file-system to cache blobs and buckets.\nThis is useful for when you want to avoid fetching large blobs multiple\ntimes, or need to pass a local file path to a third-party library.\nget_fs_cache function\nget_fs_cache() -> Optional[pathlib.Path]\nGet the folder that holds file-system cached blobs and timestamps.\nset_client_params function\nset_client_params(scheme: str, kwargs: Any) -> None\nSpecify args to pass when instantiating a service-specific Client\nobject. This allows for passing credentials in whatever way your underlying\nclient library prefers.\nCLI\nPathy command line interface. (v0.5.2)\nUsage:\n$ [OPTIONS] COMMAND [ARGS]...\nOptions:\n\n--install-completion: Install completion for the current shell.\n--show-completion: Show completion for the current shell, to copy it or customize the installation.\n--help: Show this message and exit.\n\nCommands:\n\ncp: Copy a blob or folder of blobs from one...\nls: List the blobs that exist at a given...\nmv: Move a blob or folder of blobs from one path...\nrm: Remove a blob or folder of blobs from a given...\n\ncp\nCopy a blob or folder of blobs from one bucket to another.\nUsage:\n$ cp [OPTIONS] FROM_LOCATION TO_LOCATION\nArguments:\n\nFROM_LOCATION: [required]\nTO_LOCATION: [required]\n\nOptions:\n\n--help: Show this message and exit.\n\nls\nList the blobs that exist at a given location.\nUsage:\n$ ls [OPTIONS] LOCATION\nArguments:\n\nLOCATION: [required]\n\nOptions:\n\n-l, --long: Print long style entries with updated time and size shown. 
[default: False]\n--help: Show this message and exit.\n\nmv\nMove a blob or folder of blobs from one path to another.\nUsage:\n$ mv [OPTIONS] FROM_LOCATION TO_LOCATION\nArguments:\n\nFROM_LOCATION: [required]\nTO_LOCATION: [required]\n\nOptions:\n\n--help: Show this message and exit.\n\nrm\nRemove a blob or folder of blobs from a given location.\nUsage:\n$ rm [OPTIONS] LOCATION\nArguments:\n\nLOCATION: [required]\n\nOptions:\n\n-r, --recursive: Recursively remove files and folders. [default: False]\n-v, --verbose: Print removed files and folders. [default: False]\n--help: Show this message and exit.\n\nCredits\nPathy is originally based on the S3Path project, which provides a Path interface for S3 buckets.\n\n\n", "description": "Pathlib-like API for cloud storage buckets"}, {"name": "parso", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nparso - A Python Parser\nResources\nInstallation\nFuture\nKnown Issues\nAcknowledgements\n\n\n\n\n\nREADME.rst\n\n\n\n\nparso - A Python Parser\n\n\n\n\nParso is a Python parser that supports error recovery and round-trip parsing\nfor different Python versions (in multiple Python versions). Parso is also able\nto list multiple syntax errors in your python file.\nParso has been battle-tested by jedi. It was pulled out of jedi to be useful\nfor other projects as well.\nParso consists of a small API to parse Python and analyse the syntax tree.\nA simple example:\n>>> import parso\n>>> module = parso.parse('hello + 1', version=\"3.9\")\n>>> expr = module.children[0]\n>>> expr\nPythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])\n>>> print(expr.get_code())\nhello + 1\n>>> name = expr.children[0]\n>>> name\n<Name: hello@1,0>\n>>> name.end_pos\n(1, 5)\n>>> expr.end_pos\n(1, 9)\nTo list multiple issues:\n>>> grammar = parso.load_grammar()\n>>> module = grammar.parse('foo +\\nbar\\ncontinue')\n>>> error1, error2 = grammar.iter_errors(module)\n>>> error1.message\n'SyntaxError: invalid syntax'\n>>> error2.message\n\"SyntaxError: 'continue' not properly in loop\"\n\nResources\n\nTesting\nPyPI\nDocs\nUses semantic versioning\n\n\nInstallation\n\npip install parso\n\nFuture\n\nThere will be better support for refactoring and comments. Stay tuned.\nThere's a WIP PEP8 validator. It's however not in a good shape, yet.\n\n\nKnown Issues\n\nasync/await are already used as keywords in Python3.6.\nfrom __future__ import print_function is not ignored.\n\n\nAcknowledgements\n\nGuido van Rossum (@gvanrossum) for creating the parser generator pgen2\n(originally used in lib2to3).\nSalome Schneider\nfor the extremely awesome parso logo.\n\n\n\n", "description": "Python parser."}, {"name": "paramiko", "readme": "\n    \n\nWelcome to Paramiko!\nParamiko is a pure-Python [1] (3.6+) implementation of the SSHv2 protocol\n[2], providing both client and server functionality. It provides the\nfoundation for the high-level SSH library Fabric,\nwhich is what we recommend you use for common client use-cases such as running\nremote shell commands or transferring files.\nDirect use of Paramiko itself is only intended for users who need\nadvanced/low-level primitives or want to run an in-Python sshd.\nFor installation information, changelogs, FAQs and similar, please visit our\nmain project website; for API details, see the\nversioned docs. Additionally, the project\nmaintainer keeps a roadmap on his\npersonal site.\n\n\n[1]\nParamiko relies on cryptography for crypto\nfunctionality, which makes use of C and Rust extensions but has many\nprecompiled options available. 
See our installation page for details.\n\n\n[2]\nOpenSSH\u2019s RFC specification page is a fantastic resource and collection of\nlinks that we won\u2019t bother replicating here:\nhttps://www.openssh.com/specs.html\nOpenSSH itself also happens to be our primary reference implementation:\nwhen in doubt, we consult how they do things, unless there are good reasons\nnot to. There are always some gaps, but we do our best to reconcile them\nwhen possible.\n\n\n\n", "description": "SSH2 protocol library"}, {"name": "pandocfilters", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npandocfilters\nWhat are pandoc filters?\nCompatibility\nInstalling\nAvailable functions\nHow to use\nExamples\nAPI documentation\n\n\n\n\n\nREADME.rst\n\n\n\n\npandocfilters\nA python module for writing pandoc filters\n\nWhat are pandoc filters?\nPandoc filters\nare pipes that read a JSON serialization of the Pandoc AST\nfrom stdin, transform it in some way, and write it to stdout.\nThey can be used with pandoc (>= 1.12) either using pipes\npandoc -t json -s | ./caps.py | pandoc -f json\n\nor using the --filter (or -F) command-line option.\npandoc --filter ./caps.py -s\n\nFor more on pandoc filters, see the pandoc documentation under --filter\nand the tutorial on writing filters.\nFor an alternative library for writing pandoc filters, with\na more \"Pythonic\" design, see panflute.\n\nCompatibility\nPandoc 1.16 introduced link and image attributes to the existing\ncaption and target arguments, requiring a change in pandocfilters\nthat breaks backwards compatibility. Consequently, you should use:\n\npandocfilters version <= 1.2.4 for pandoc versions 1.12--1.15, and\npandocfilters version >= 1.3.0 for pandoc versions >= 1.16.\n\nPandoc 1.17.3 (pandoc-types 1.17.*) introduced a new JSON format.\npandocfilters 1.4.0 should work with both the old and the new\nformat.\n\nInstalling\nRun this inside the present directory:\npython setup.py install\n\nOr install from PyPI:\npip install pandocfilters\n\n\nAvailable functions\nThe main functions pandocfilters exports are\n\nwalk(x, action, format, meta)\nWalk a tree, applying an action to every object. Returns a modified\ntree. An action is a function of the form\naction(key, value, format, meta), where:\n\nkey is the type of the pandoc object (e.g. 'Str', 'Para')\nvalue is the contents of the object (e.g. 
a string for 'Str', a list of\ninline elements for 'Para')\nformat is the target output format (as supplied by the\nformat argument of walk)\nmeta is the document's metadata\n\nThe return of an action is either:\n\nNone: this means that the object should remain unchanged\na pandoc object: this will replace the original object\na list of pandoc objects: these will replace the original object;\nthe list is merged with the neighbors of the original objects\n(spliced into the list the original object belongs to); returning\nan empty list deletes the object\n\n\ntoJSONFilter(action)\nLike toJSONFilters, but takes a single action as argument.\n\ntoJSONFilters(actions)\nGenerate a JSON-to-JSON filter from stdin to stdout\nThe filter:\n\nreads a JSON-formatted pandoc document from stdin\ntransforms it by walking the tree and performing the actions\nreturns a new JSON-formatted pandoc document to stdout\n\nThe argument actions is a list of functions of the form\naction(key, value, format, meta), as described in more detail\nunder walk.\nThis function calls applyJSONFilters, with the format\nargument provided by the first command-line argument, if present.\n(Pandoc sets this by default when calling filters.)\n\napplyJSONFilters(actions, source, format=\"\")\nWalk through JSON structure and apply filters\nThis:\n\nreads a JSON-formatted pandoc document from a source string\ntransforms it by walking the tree and performing the actions\nreturns a new JSON-formatted pandoc document as a string\n\nThe actions argument is a list of functions (see walk for a\nfull description).\nThe argument source is a string encoded JSON object.\nThe argument format is a string describing the output format.\nReturns a new JSON-formatted pandoc document.\n\nstringify(x)\nWalks the tree x and returns concatenated string content, leaving out\nall formatting.\n\nattributes(attrs)\nReturns an attribute list, constructed from the dictionary attrs.\n\n\n\nHow to use\nMost users will only need toJSONFilter.  Here is a simple example\nof its use:\n#!/usr/bin/env python\n\n\"\"\"\nPandoc filter to convert all regular text to uppercase.\nCode, link URLs, etc. are not affected.\n\"\"\"\n\nfrom pandocfilters import toJSONFilter, Str\n\ndef caps(key, value, format, meta):\n  if key == 'Str':\n    return Str(value.upper())\n\nif __name__ == \"__main__\":\n  toJSONFilter(caps)\n\n\nExamples\nThe examples subdirectory in the source repository contains the\nfollowing filters. These filters should provide a useful starting point\nfor developing your own pandocfilters.\n\nabc.py\nPandoc filter to process code blocks with class abc containing ABC\nnotation into images. Assumes that abcm2ps and ImageMagick's convert\nare in the path. Images are put in the abc-images directory.\ncaps.py\nPandoc filter to convert all regular text to uppercase. Code, link\nURLs, etc. are not affected.\nblockdiag.py\nPandoc filter to process code blocks with class \"blockdiag\" into\ngenerated images. 
Needs utils from http://blockdiag.com.\ncomments.py\nPandoc filter that causes everything between\n<!-- BEGIN COMMENT --> and <!-- END COMMENT --> to be ignored.\nThe comment lines must appear on lines by themselves, with blank\nlines surrounding\ndeemph.py\nPandoc filter that causes emphasized text to be displayed in ALL\nCAPS.\ndeflists.py\nPandoc filter to convert definition lists to bullet lists with the\ndefined terms in strong emphasis (for compatibility with standard\nmarkdown).\ngabc.py\nPandoc filter to convert code blocks with class \"gabc\" to LaTeX\n\\gabcsnippet commands in LaTeX output, and to images in HTML output.\ngraphviz.py\nPandoc filter to process code blocks with class graphviz into\ngraphviz-generated images.\nlilypond.py\nPandoc filter to process code blocks with class \"ly\" containing\nLilypond notation.\nmetavars.py\nPandoc filter to allow interpolation of metadata fields into a\ndocument. %{fields} will be replaced by the field's value, assuming\nit is of the type MetaInlines or MetaString.\nmyemph.py\nPandoc filter that causes emphasis to be rendered using the custom\nmacro \\myemph{...} rather than \\emph{...} in latex. Other output\nformats are unaffected.\nplantuml.py\nPandoc filter to process code blocks with class plantuml to images.\nNeeds plantuml.jar from http://plantuml.com/.\nditaa.py\nPandoc filter to process code blocks with class ditaa to images.\nNeeds ditaa.jar from http://ditaa.sourceforge.net/.\ntheorem.py\nPandoc filter to convert divs with class=\"theorem\" to LaTeX theorem\nenvironments in LaTeX output, and to numbered theorems in HTML\noutput.\ntikz.py\nPandoc filter to process raw latex tikz environments into images.\nAssumes that pdflatex is in the path, and that the standalone\npackage is available. Also assumes that ImageMagick's convert is in\nthe path. Images are put in the tikz-images directory.\n\n\nAPI documentation\nBy default most filters use get_filename4code to\ncreate a directory ...-images to save temporary\nfiles. This directory doesn't get removed as it can be used as a cache so that\nlater pandoc runs don't have to recreate files if they already exist. The\ndirectory is generated in the current directory.\nIf you prefer to have a clean directory after running pandoc filters, you\ncan set an environment variable PANDOCFILTER_CLEANUP to any non-empty value such as 1\nwhich forces the code to create a temporary directory that will be removed\nby the end of execution.\n\n\n", "description": "Library for writing pandoc filters in Python"}, {"name": "pandas", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npandas: powerful Python data analysis toolkit\nWhat is it?\nTable of Contents\nMain Features\nWhere to get it\nDependencies\nInstallation from sources\nLicense\nDocumentation\nBackground\nGetting Help\nDiscussion and Development\nContributing to pandas\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\npandas: powerful Python data analysis toolkit\n\n\n\n\n\n\n\n\n\nTesting\n \n\n\nPackage\n   \n\n\nMeta\n   \n\n\n\nWhat is it?\npandas is a Python package that provides fast, flexible, and expressive data\nstructures designed to make working with \"relational\" or \"labeled\" data both\neasy and intuitive. It aims to be the fundamental high-level building block for\ndoing practical, real world data analysis in Python. Additionally, it has\nthe broader goal of becoming the most powerful and flexible open source data\nanalysis / manipulation tool available in any language. 
It is already well on\nits way towards this goal.\nTable of Contents\n\nMain Features\nWhere to get it\nDependencies\nInstallation from sources\nLicense\nDocumentation\nBackground\nGetting Help\nDiscussion and Development\nContributing to pandas\n\nMain Features\nHere are just a few of the things that pandas does well:\n\nEasy handling of missing data (represented as\nNaN, NA, or NaT) in floating point as well as non-floating point data\nSize mutability: columns can be inserted and\ndeleted from DataFrame and higher dimensional\nobjects\nAutomatic and explicit data alignment: objects can\nbe explicitly aligned to a set of labels, or the user can simply\nignore the labels and let Series, DataFrame, etc. automatically\nalign the data for you in computations\nPowerful, flexible group by functionality to perform\nsplit-apply-combine operations on data sets, for both aggregating\nand transforming data\nMake it easy to convert ragged,\ndifferently-indexed data in other Python and NumPy data structures\ninto DataFrame objects\nIntelligent label-based slicing, fancy\nindexing, and subsetting of\nlarge data sets\nIntuitive merging and joining data\nsets\nFlexible reshaping and pivoting of\ndata sets\nHierarchical labeling of axes (possible to have multiple\nlabels per tick)\nRobust IO tools for loading data from flat files\n(CSV and delimited), Excel files, databases,\nand saving/loading data from the ultrafast HDF5 format\nTime series-specific functionality: date range\ngeneration and frequency conversion, moving window statistics,\ndate shifting and lagging\n\nWhere to get it\nThe source code is currently hosted on GitHub at:\nhttps://github.com/pandas-dev/pandas\nBinary installers for the latest released version are available at the Python\nPackage Index (PyPI) and on Conda.\n# conda\nconda install -c conda-forge pandas\n# or PyPI\npip install pandas\nThe list of changes to pandas between each release can be found\nhere. For full\ndetails, see the commit logs at https://github.com/pandas-dev/pandas.\nDependencies\n\nNumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays\npython-dateutil - Provides powerful extensions to the standard datetime module\npytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations\n\nSee the full installation instructions for minimum supported versions of required, recommended and optional dependencies.\nInstallation from sources\nTo install pandas from source you need Cython in addition to the normal\ndependencies above. Cython can be installed from PyPI:\npip install cython\nIn the pandas directory (same one where you found this file after\ncloning the git repo), execute:\npip install .\nor for installing in development mode:\npython -m pip install -ve . 
--no-build-isolation --config-settings=editable-verbose=true\nSee the full instructions for installing from source.\nLicense\nBSD 3\nDocumentation\nThe official documentation is hosted on PyData.org.\nBackground\nWork on pandas started at AQR (a quantitative hedge fund) in 2008 and\nhas been under active development since then.\nGetting Help\nFor usage questions, the best place to go to is StackOverflow.\nFurther, general questions and discussions can also take place on the pydata mailing list.\nDiscussion and Development\nMost development discussions take place on GitHub in this repo, via the GitHub issue tracker.\nFurther, the pandas-dev mailing list can also be used for specialized discussions or design issues, and a Slack channel is available for quick development related questions.\nThere are also frequent community meetings for project maintainers open to the community as well as monthly new contributor meetings to help support new contributors.\nAdditional information on the communication channels can be found on the contributor community page.\nContributing to pandas\n\nAll contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.\nA detailed overview on how to contribute can be found in the contributing guide.\nIf you are simply looking to start working with the pandas codebase, navigate to the GitHub \"issues\" tab and start looking through interesting issues. There are a number of issues listed under Docs and good first issue where you could start out.\nYou can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to subscribe to pandas on CodeTriage.\nOr maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking \u2018this can be improved\u2019...you can do something about it!\nFeel free to ask questions on the mailing list or on Slack.\nAs contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. 
More information can be found at: Contributor Code of Conduct\n\nGo to Top\n\n\n", "description": "Data analysis and manipulation library for Python.", "category": "Data analysis/science"}, {"name": "packaging", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npackaging\nDocumentation\nInstallation\nDiscussion\nCode of Conduct\nContributing\nProject History\n\n\n\n\n\nREADME.rst\n\n\n\n\npackaging\nReusable core utilities for various Python Packaging\ninteroperability specifications.\nThis library provides utilities that implement the interoperability\nspecifications which have clearly one correct behaviour (eg: PEP 440)\nor benefit greatly from having a single shared implementation (eg: PEP 425).\nThe packaging project includes the following: version handling, specifiers,\nmarkers, requirements, tags, utilities.\n\nDocumentation\nThe documentation provides information and the API for the following:\n\nVersion Handling\nSpecifiers\nMarkers\nRequirements\nTags\nUtilities\n\n\nInstallation\nUse pip to install these utilities:\npip install packaging\n\nThe packaging library uses calendar-based versioning (YY.N).\n\nDiscussion\nIf you run into bugs, you can file them in our issue tracker.\nYou can also join #pypa on Freenode to ask questions or get involved.\n\nCode of Conduct\nEveryone interacting in the packaging project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\nContributing\nThe CONTRIBUTING.rst file outlines how to contribute to this project as\nwell as how to report a potential security issue. The documentation for this\nproject also covers information about project development and security.\n\nProject History\nPlease review the CHANGELOG.rst file or the Changelog documentation for\nrecent changes and project history.\n\n\n", "description": "Core utilities for Python packaging."}, {"name": "oscrypto", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\noscrypto\nSupported Operating Systems\nFeatures\nWhy Another Python Crypto Library?\nRelated Crypto Libraries\nCurrent Release\nDependencies\nInstallation\nLicense\nDocumentation\nContinuous Integration\nTesting\nGit Repository\nBackend Options\nInternet Tests\nPyPi Source Distribution\nTest Options\nForce OpenSSL Shared Library Paths\nForce Use of ctypes\nForce Use of Legacy Windows Crypto APIs\nSkip Tests Requiring an Internet Connection\nPackage\nDevelopment\nCI Tasks\n\n\n\n\n\nreadme.md\n\n\n\n\noscrypto\nA compilation-free, always up-to-date encryption library for Python that works\non Windows, OS X, Linux and BSD. Supports the following versions of Python:\n2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10 and pypy.\n\nSupported Operating Systems\nFeatures\nWhy Another Python Crypto Library?\nRelated Crypto Libraries\nCurrent Release\nDependencies\nInstallation\nLicense\nDocumentation\nContinuous Integration\nTesting\nDevelopment\nCI Tasks\n\n\n\n\nSupported Operating Systems\nThe library integrates with the encryption library that is part of the operating\nsystem. This means that a compiler is never needed, and OS security updates take\ncare of patching vulnerabilities. 
Supported operating systems include:\n\nWindows XP or newer\n\nUses:\n\nCryptography API: Next Generation (CNG)\nSecure Channel for TLS\nCryptoAPI for trust lists and XP support\n\n\nTested on:\n\nWindows XP (no SNI)\nWindows 7\nWindows 8.1\nWindows Server 2012\nWindows 10\n\n\n\n\nOS X 10.7 or newer\n\nUses:\n\nSecurity.framework\nSecure Transport for TLS\nCommonCrypto for PBKDF2\nOpenSSL (or LibreSSL on macOS 10.13) for the PKCS #12 KDF\n\n\nTested on:\n\nOS X 10.7\nOS X 10.8\nOS X 10.9\nOS X 10.10\nOS X 10.11\nOS X 10.11 with OpenSSL 1.1.0\nmacOS 10.12\nmacOS 10.13 with LibreSSL 2.2.7\nmacOS 10.14\nmacOS 10.15\nmacOS 10.15 with OpenSSL 3.0\nmacOS 11\nmacOS 12\n\n\n\n\nLinux or BSD\n\nUses one of:\n\nOpenSSL 0.9.8\nOpenSSL 1.0.x\nOpenSSL 1.1.0\nOpenSSL 3.0\nLibreSSL\n\n\nTested on:\n\nArch Linux with OpenSSL 1.0.2\nOpenBSD 5.7 with LibreSSL\nUbuntu 10.04 with OpenSSL 0.9.8\nUbuntu 12.04 with OpenSSL 1.0.1\nUbuntu 15.04 with OpenSSL 1.0.1\nUbuntu 16.04 with OpenSSL 1.0.2 on Raspberry Pi 3 (armhf)\nUbuntu 18.04 with OpenSSL 1.1.x (amd64, arm64, ppc64el)\nUbuntu 22.04 with OpenSSL 3.0 (amd64)\n\n\n\n\n\nOS X 10.6 will not be supported due to a lack of available\ncryptographic primitives and due to lack of vendor support.\nFeatures\nCurrently the following features are implemented. Many of these should only be\nused for integration with existing/legacy systems. If you don't know which you\nshould, or should not use, please see Learning.\n\nTLSv1.x socket wrappers\n\nCertificate verification performed by OS trust roots\nCustom CA certificate support\nSNI support (except Windows XP)\nSession reuse via IDs/tickets\nModern cipher suites (RC4, DES, anon and NULL ciphers disabled)\nWeak DH parameters and certificate signatures rejected\nSSLv3 disabled by default, SSLv2 unimplemented\nCRL/OCSP revocation checks consistenty disabled\n\n\nExporting OS trust roots\n\nPEM-formatted CA certs from the OS for OpenSSL-based code\n\n\nEncryption/decryption\n\nAES (128, 192, 256), CBC mode, PKCS7 padding\nAES (128, 192, 256), CBC mode, no padding\nTripleDES 3-key, CBC mode, PKCS5 padding\nTripleDes 2-key, CBC mode, PKCS5 padding\nDES, CBC mode, PKCS5 padding\nRC2 (40-128), CBC mode, PKCS5 padding\nRC4 (40-128)\nRSA PKCSv1.5\nRSA OAEP (SHA1 only)\n\n\nGenerating public/private key pairs\n\nRSA (1024, 2048, 3072, 4096 bit)\nDSA (1024 bit on all platforms - 2048, 3072 bit with OpenSSL 1.x or\nWindows 8)\nEC (secp256r1, secp384r1, secp521r1 curves)\n\n\nGenerating DH parameters\nSigning and verification\n\nRSA PKCSv1.5\nRSA PSS\nDSA\nEC\n\n\nLoading and normalizing DER and PEM formatted keys\n\nRSA public and private keys\nDSA public and private keys\nEC public and private keys\nX.509 Certificates\nPKCS#12 archives (.pfx/.p12)\n\n\nKey derivation\n\nPBKDF2\nPBKDF1\nPKCS#12 KDF\n\n\nRandom byte generation\n\nThe feature set was largely driven by the technologies used related to\ngenerating and validating X.509 certificates. The various CBC encryption schemes\nand KDFs are used to load encrypted private keys, and the various RSA padding\nschemes are part of X.509 signatures.\nFor modern cryptography not tied to an existing system, please see the\nModern Cryptography section of the docs.\nPlease note that this library does not include modern block modes such as CTR\nand GCM due to lack of support from both OS X and OpenSSL 0.9.8.\nWhy Another Python Crypto Library?\nIn short, the existing cryptography libraries for Python didn't fit the needs of\na couple of projects I was working on. 
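To make the key-generation and signing features listed earlier concrete, here is a minimal sketch, assuming the generate_pair, rsa_pkcs1v15_sign and rsa_pkcs1v15_verify helpers in oscrypto.asymmetric behave as the feature list suggests:\nfrom oscrypto import asymmetric\n\n# Generate a 2048-bit RSA key pair (assumed helper; returns public and private key objects)\npublic_key, private_key = asymmetric.generate_pair('rsa', bit_size=2048)\n\ndata = b'message to sign'\n# Sign with RSA PKCS#1 v1.5 over SHA-256, then verify; verification raises an error on a bad signature\nsignature = asymmetric.rsa_pkcs1v15_sign(private_key, data, 'sha256')\nasymmetric.rsa_pkcs1v15_verify(public_key, signature, data, 'sha256')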
Primarily these are applications\ndistributed to end-users who aren't programmers, that need to handle TLS and\nvarious technologies related to X.509 certificates.\nIf your system is not tied to AES, TLS, X.509, or related technologies, you\nprobably want more modern cryptography.\nDepending on your needs, the cryptography package may\nbe a good (or better) fit.\nSome things that make oscrypto unique:\n\nNo compiler needed, ever. No need to pre-compile shared libraries. Just\ndistribute the Python source files, any way you want.\nUses the operating system's crypto library - does not require OpenSSL on\nWindows or OS X.\nRelies on the operating system for security patching. You don't need to\nrebuild all of your apps every time there is a new TLS vulnerability.\nIntentionally limited in scope to crypto primitives. Other libraries\nbuilt upon it deal with certificate path validation, creating certificates\nand CSRs, constructing CMS structures.\nBuilt on top of a fast, pure-Python ASN.1 parser,\nasn1crypto.\nTLS functionality uses the operating system's trust list/CA certs and is\npre-configured with sane defaults\nPublic APIs are simple and use strict type checks to avoid errors\n\nSome downsides include:\n\nDoes not currently implement:\n\nstandalone DH key exchange\nvarious encryption modes such as GCM, CCM, CTR, CFB, OFB, ECB\nkey wrapping\nCMAC\nHKDF\n\n\nNon-TLS functionality is architected for dealing with data that fits in\nmemory and is available all at once\nDeveloped by a single developer\n\nRelated Crypto Libraries\noscrypto is part of the modularcrypto family of Python packages:\n\nasn1crypto\noscrypto\ncsrbuilder\ncertbuilder\ncrlbuilder\nocspbuilder\ncertvalidator\n\nCurrent Release\n1.3.0 - changelog\nDependencies\n\nasn1crypto\nPython 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11 or pypy\nOpenSSL/LibreSSL if on Linux\u00b9\n\n\u00b9 On Linux, ctypes.util.find_library() is used to located OpenSSL. Alpine Linux does not have an appropriate install by default for find_library() to work properly. Instead, oscrypto.use_openssl() must be called with the path to the OpenSSL shared libraries.\nInstallation\npip install oscrypto\nLicense\noscrypto is licensed under the terms of the MIT license. See the\nLICENSE file for the exact license text.\nDocumentation\noscrypto documentation\nContinuous Integration\nVarious combinations of platforms and versions of Python are tested via:\n\nmacOS, Linux, Windows via GitHub Actions\narm64 via CircleCI\n\nTesting\nTests are written using unittest and require no third-party packages.\nDepending on what type of source is available for the package, the following\ncommands can be used to run the test suite.\nGit Repository\nWhen working within a Git working copy, or an archive of the Git repository,\nthe full test suite is run via:\npython run.py tests\nTo run only some tests, pass a regular expression as a parameter to tests.\npython run.py tests aes\nTo run tests multiple times, in order to catch edge-case bugs, pass an integer\nto tests. 
If combined with a regular expression for filtering, pass the\nrepeat count after the regular expression.\npython run.py tests 20\npython run.py tests aes 20\nBackend Options\nTo run tests using a custom build of OpenSSL, or to use OpenSSL on Windows or\nMac, add use_openssl after run.py, like:\npython run.py use_openssl=/path/to/libcrypto.so,/path/to/libssl.so tests\nTo run tests forcing the use of ctypes, even if cffi is installed, add\nuse_ctypes after run.py:\npython run.py use_ctypes=true tests\nTo run tests using the legacy Windows crypto functions on Windows 7+, add\nuse_winlegacy after run.py:\npython run.py use_winlegacy=true tests\nInternet Tests\nTo skip tests that require an internet connection, add skip_internet after\nrun.py:\npython run.py skip_internet=true tests\nPyPi Source Distribution\nWhen working within an extracted source distribution (aka .tar.gz) from\nPyPi, the full test suite is run via:\npython setup.py test\nTest Options\nThe following env vars can control aspects of running tests:\nForce OpenSSL Shared Library Paths\nSetting the env var OSCRYPTO_USE_OPENSSL to a string in the form:\n/path/to/libcrypto.so,/path/to/libssl.so\n\nwill force use of specific OpenSSL shared libraries.\nThis also works on Mac and Windows to force use of OpenSSL instead of using\nnative crypto libraries.\nForce Use of ctypes\nBy default, oscrypto will use the cffi module for FFI if it is installed.\nTo use the slightly slower, but more widely-tested, ctypes FFI layer, set\nthe env var OSCRYPTO_USE_CTYPES=true.\nForce Use of Legacy Windows Crypto APIs\nOn Windows 7 and newer, oscrypto will use the CNG backend by default.\nTo force use of the older CryptoAPI, set the env var\nOSCRYPTO_USE_WINLEGACY=true.\nSkip Tests Requiring an Internet Connection\nSome of the TLS tests require an active internet connection to ensure that\nvarious \"bad\" server certificates are rejected.\nTo skip tests requiring an internet connection, set the env var\nOSCRYPTO_SKIP_INTERNET_TESTS=true.\nPackage\nWhen the package has been installed via pip (or another method), the package\noscrypto_tests may be installed and invoked to run the full test suite:\npip install oscrypto_tests\npython -m oscrypto_tests\nDevelopment\nTo install the package used for linting, execute:\npip install --user -r requires/lint\nThe following command will run the linter:\npython run.py lint\nSupport for code coverage can be installed via:\npip install --user -r requires/coverage\nCoverage is measured by running:\npython run.py coverage\nTo install the packages requires to generate the API documentation, run:\npip install --user -r requires/api_docs\nThe documentation can then be generated by running:\npython run.py api_docs\nTo install the necessary packages for releasing a new version on PyPI, run:\npip install --user -r requires/release\nReleases are created by:\n\n\nMaking a git tag in semver format\n\n\nRunning the command:\npython run.py release\n\n\nExisting releases can be found at https://pypi.python.org/pypi/oscrypto.\nCI Tasks\nA task named deps exists to download and stage all necessary testing\ndependencies. On posix platforms, curl is used for downloads and on Windows\nPowerShell with Net.WebClient is used. 
This configuration sidesteps issues\nrelated to getting pip to work properly and messing with site-packages for\nthe version of Python being used.\nThe ci task runs lint (if flake8 is available for the version of Python) and\ncoverage (or tests if coverage is not available for the version of Python).\nIf the current directory is a clean git working copy, the coverage data is\nsubmitted to codecov.io.\npython run.py deps\npython run.py ci\n\n\n"}, {"name": "orjson", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\norjson\nUsage\nInstall\nQuickstart\nMigrating\nSerialize\ndefault\noption\nOPT_APPEND_NEWLINE\nOPT_INDENT_2\nOPT_NAIVE_UTC\nOPT_NON_STR_KEYS\nOPT_OMIT_MICROSECONDS\nOPT_PASSTHROUGH_DATACLASS\nOPT_PASSTHROUGH_DATETIME\nOPT_PASSTHROUGH_SUBCLASS\nOPT_SERIALIZE_DATACLASS\nOPT_SERIALIZE_NUMPY\nOPT_SERIALIZE_UUID\nOPT_SORT_KEYS\nOPT_STRICT_INTEGER\nOPT_UTC_Z\nFragment\nDeserialize\nTypes\ndataclass\ndatetime\nenum\nfloat\nint\nnumpy\nstr\nuuid\nTesting\nPerformance\nLatency\ntwitter.json serialization\ntwitter.json deserialization\ngithub.json serialization\ngithub.json deserialization\ncitm_catalog.json serialization\ncitm_catalog.json deserialization\ncanada.json serialization\ncanada.json deserialization\nMemory\ntwitter.json\ngithub.json\ncitm_catalog.json\ncanada.json\nReproducing\nQuestions\nWhy can't I install it from PyPI?\n\"Cargo, the Rust package manager, is not installed or is not on PATH.\"\nWill it deserialize to dataclasses, UUIDs, decimals, etc or support object_hook?\nWill it serialize to str?\nWill it support PyPy?\nPackaging\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\norjson\norjson is a fast, correct JSON library for Python. It\nbenchmarks as the fastest Python\nlibrary for JSON and is more correct than the standard json library or other\nthird-party libraries. It serializes\ndataclass,\ndatetime,\nnumpy, and\nUUID instances natively.\nIts features and drawbacks compared to other Python JSON libraries:\n\nserializes dataclass instances 40-50x as fast as other libraries\nserializes datetime, date, and time instances to RFC 3339 format,\ne.g., \"1970-01-01T00:00:00+00:00\"\nserializes numpy.ndarray instances 4-12x as fast with 0.3x the memory\nusage of other libraries\npretty prints 10x to 20x as fast as the standard library\nserializes to bytes rather than str, i.e., is not a drop-in replacement\nserializes str without escaping unicode to ASCII, e.g., \"\u597d\" rather than\n\"\\\\u597d\"\nserializes float 10x as fast and deserializes twice as fast as other\nlibraries\nserializes subclasses of str, int, list, and dict natively,\nrequiring default to specify how to serialize others\nserializes arbitrary types using a default hook\nhas strict UTF-8 conformance, more correct than the standard library\nhas strict JSON conformance in not supporting Nan/Infinity/-Infinity\nhas an option for strict JSON conformance on 53-bit integers with default\nsupport for 64-bit\ndoes not provide load() or dump() functions for reading from/writing to\nfile-like objects\n\norjson supports CPython 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12. It distributes\namd64/x86_64, aarch64/armv8, arm7, POWER/ppc64le, and s390x wheels for Linux,\namd64 and aarch64 wheels for macOS, and amd64 and i686/x86 wheels for Windows.\norjson  does not support PyPy. Releases follow semantic versioning and\nserializing a new object type without an opt-in flag is considered a\nbreaking change.\norjson is licensed under both the Apache 2.0 and MIT licenses. 
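Because orjson intentionally provides no load() or dump() helpers for file-like objects (see the feature list above), reading and writing files is done explicitly with loads() and dumps(). A minimal sketch; the file names are illustrative:

import orjson

# Read: loads() accepts bytes directly, so open the file in binary mode.
with open("data.json", "rb") as f:
    obj = orjson.loads(f.read())

# Write: dumps() returns bytes, so write in binary mode as well.
with open("out.json", "wb") as f:
    f.write(orjson.dumps(obj))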
The\nrepository and issue tracker is\ngithub.com/ijl/orjson, and patches may be\nsubmitted there. There is a\nCHANGELOG\navailable in the repository.\n\nUsage\n\nInstall\nQuickstart\nMigrating\nSerialize\n\ndefault\noption\nFragment\n\n\nDeserialize\n\n\nTypes\n\ndataclass\ndatetime\nenum\nfloat\nint\nnumpy\nstr\nuuid\n\n\nTesting\nPerformance\n\nLatency\nMemory\nReproducing\n\n\nQuestions\nPackaging\nLicense\n\nUsage\nInstall\nTo install a wheel from PyPI:\npip install --upgrade \"pip>=20.3\" # manylinux_x_y, universal2 wheel support\npip install --upgrade orjson\nTo build a wheel, see packaging.\nQuickstart\nThis is an example of serializing, with options specified, and deserializing:\n>>> import orjson, datetime, numpy\n>>> data = {\n    \"type\": \"job\",\n    \"created_at\": datetime.datetime(1970, 1, 1),\n    \"status\": \"\ud83c\udd97\",\n    \"payload\": numpy.array([[1, 2], [3, 4]]),\n}\n>>> orjson.dumps(data, option=orjson.OPT_NAIVE_UTC | orjson.OPT_SERIALIZE_NUMPY)\nb'{\"type\":\"job\",\"created_at\":\"1970-01-01T00:00:00+00:00\",\"status\":\"\\xf0\\x9f\\x86\\x97\",\"payload\":[[1,2],[3,4]]}'\n>>> orjson.loads(_)\n{'type': 'job', 'created_at': '1970-01-01T00:00:00+00:00', 'status': '\ud83c\udd97', 'payload': [[1, 2], [3, 4]]}\nMigrating\norjson version 3 serializes more types than version 2. Subclasses of str,\nint, dict, and list are now serialized. This is faster and more similar\nto the standard library. It can be disabled with\norjson.OPT_PASSTHROUGH_SUBCLASS.dataclasses.dataclass instances\nare now serialized by default and cannot be customized in a\ndefault function unless option=orjson.OPT_PASSTHROUGH_DATACLASS is\nspecified. uuid.UUID instances are serialized by default.\nFor any type that is now serialized,\nimplementations in a default function and options enabling them can be\nremoved but do not need to be. There was no change in deserialization.\nTo migrate from the standard library, the largest difference is that\norjson.dumps returns bytes and json.dumps returns a str. Users with\ndict objects using non-str keys should specify\noption=orjson.OPT_NON_STR_KEYS. sort_keys is replaced by\noption=orjson.OPT_SORT_KEYS. indent is replaced by\noption=orjson.OPT_INDENT_2 and other levels of indentation are not\nsupported.\nSerialize\ndef dumps(\n    __obj: Any,\n    default: Optional[Callable[[Any], Any]] = ...,\n    option: Optional[int] = ...,\n) -> bytes: ...\ndumps() serializes Python objects to JSON.\nIt natively serializes\nstr, dict, list, tuple, int, float, bool, None,\ndataclasses.dataclass, typing.TypedDict, datetime.datetime,\ndatetime.date, datetime.time, uuid.UUID, numpy.ndarray, and\norjson.Fragment instances. It supports arbitrary types through default. It\nserializes subclasses of str, int, dict, list,\ndataclasses.dataclass, and enum.Enum. It does not serialize subclasses\nof tuple to avoid serializing namedtuple objects as arrays. To avoid\nserializing subclasses, specify the option orjson.OPT_PASSTHROUGH_SUBCLASS.\nThe output is a bytes object containing UTF-8.\nThe global interpreter lock (GIL) is held for the duration of the call.\nIt raises JSONEncodeError on an unsupported type. This exception message\ndescribes the invalid object with the error message\nType is not JSON serializable: .... 
To fix this, specify\ndefault.\nIt raises JSONEncodeError on a str that contains invalid UTF-8.\nIt raises JSONEncodeError on an integer that exceeds 64 bits by default or,\nwith OPT_STRICT_INTEGER, 53 bits.\nIt raises JSONEncodeError if a dict has a key of a type other than str,\nunless OPT_NON_STR_KEYS is specified.\nIt raises JSONEncodeError if the output of default recurses to handling by\ndefault more than 254 levels deep.\nIt raises JSONEncodeError on circular references.\nIt raises JSONEncodeError  if a tzinfo on a datetime object is\nunsupported.\nJSONEncodeError is a subclass of TypeError. This is for compatibility\nwith the standard library.\nIf the failure was caused by an exception in default then\nJSONEncodeError chains the original exception as __cause__.\ndefault\nTo serialize a subclass or arbitrary types, specify default as a\ncallable that returns a supported type. default may be a function,\nlambda, or callable class instance. To specify that a type was not\nhandled by default, raise an exception such as TypeError.\n>>> import orjson, decimal\n>>>\ndef default(obj):\n    if isinstance(obj, decimal.Decimal):\n        return str(obj)\n    raise TypeError\n\n>>> orjson.dumps(decimal.Decimal(\"0.0842389659712649442845\"))\nJSONEncodeError: Type is not JSON serializable: decimal.Decimal\n>>> orjson.dumps(decimal.Decimal(\"0.0842389659712649442845\"), default=default)\nb'\"0.0842389659712649442845\"'\n>>> orjson.dumps({1, 2}, default=default)\norjson.JSONEncodeError: Type is not JSON serializable: set\nThe default callable may return an object that itself\nmust be handled by default up to 254 times before an exception\nis raised.\nIt is important that default raise an exception if a type cannot be handled.\nPython otherwise implicitly returns None, which appears to the caller\nlike a legitimate value and is serialized:\n>>> import orjson, json, rapidjson\n>>>\ndef default(obj):\n    if isinstance(obj, decimal.Decimal):\n        return str(obj)\n\n>>> orjson.dumps({\"set\":{1, 2}}, default=default)\nb'{\"set\":null}'\n>>> json.dumps({\"set\":{1, 2}}, default=default)\n'{\"set\":null}'\n>>> rapidjson.dumps({\"set\":{1, 2}}, default=default)\n'{\"set\":null}'\noption\nTo modify how data is serialized, specify option. Each option is an integer\nconstant in orjson. To specify multiple options, mask them together, e.g.,\noption=orjson.OPT_STRICT_INTEGER | orjson.OPT_NAIVE_UTC.\nOPT_APPEND_NEWLINE\nAppend \\n to the output. This is a convenience and optimization for the\npattern of dumps(...) + \"\\n\". bytes objects are immutable and this\npattern copies the original contents.\n>>> import orjson\n>>> orjson.dumps([])\nb\"[]\"\n>>> orjson.dumps([], option=orjson.OPT_APPEND_NEWLINE)\nb\"[]\\n\"\nOPT_INDENT_2\nPretty-print output with an indent of two spaces. This is equivalent to\nindent=2 in the standard library. Pretty printing is slower and the output\nlarger. orjson is the fastest compared library at pretty printing and has\nmuch less of a slowdown to pretty print than the standard library does. 
This\noption is compatible with all other options.\n>>> import orjson\n>>> orjson.dumps({\"a\": \"b\", \"c\": {\"d\": True}, \"e\": [1, 2]})\nb'{\"a\":\"b\",\"c\":{\"d\":true},\"e\":[1,2]}'\n>>> orjson.dumps(\n    {\"a\": \"b\", \"c\": {\"d\": True}, \"e\": [1, 2]},\n    option=orjson.OPT_INDENT_2\n)\nb'{\\n  \"a\": \"b\",\\n  \"c\": {\\n    \"d\": true\\n  },\\n  \"e\": [\\n    1,\\n    2\\n  ]\\n}'\nIf displayed, the indentation and linebreaks appear like this:\n{\n  \"a\": \"b\",\n  \"c\": {\n    \"d\": true\n  },\n  \"e\": [\n    1,\n    2\n  ]\n}\nThis measures serializing the github.json fixture as compact (52KiB) or\npretty (64KiB):\n\n\n\nLibrary\ncompact (ms)\npretty (ms)\nvs. orjson\n\n\n\n\norjson\n0.03\n0.04\n1\n\n\nujson\n0.18\n0.19\n4.6\n\n\nrapidjson\n0.1\n0.12\n2.9\n\n\nsimplejson\n0.25\n0.89\n21.4\n\n\njson\n0.18\n0.71\n17\n\n\n\nThis measures serializing the citm_catalog.json fixture, more of a worst\ncase due to the amount of nesting and newlines, as compact (489KiB) or\npretty (1.1MiB):\n\n\n\nLibrary\ncompact (ms)\npretty (ms)\nvs. orjson\n\n\n\n\norjson\n0.59\n0.71\n1\n\n\nujson\n2.9\n3.59\n5\n\n\nrapidjson\n1.81\n2.8\n3.9\n\n\nsimplejson\n10.43\n42.13\n59.1\n\n\njson\n4.16\n33.42\n46.9\n\n\n\nThis can be reproduced using the pyindent script.\nOPT_NAIVE_UTC\nSerialize datetime.datetime objects without a tzinfo as UTC. This\nhas no effect on datetime.datetime objects that have tzinfo set.\n>>> import orjson, datetime\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0),\n    )\nb'\"1970-01-01T00:00:00\"'\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0),\n        option=orjson.OPT_NAIVE_UTC,\n    )\nb'\"1970-01-01T00:00:00+00:00\"'\nOPT_NON_STR_KEYS\nSerialize dict keys of type other than str. This allows dict keys\nto be one of str, int, float, bool, None, datetime.datetime,\ndatetime.date, datetime.time, enum.Enum, and uuid.UUID. For comparison,\nthe standard library serializes str, int, float, bool or None by\ndefault. orjson benchmarks as being faster at serializing non-str keys\nthan other libraries. This option is slower for str keys than the default.\n>>> import orjson, datetime, uuid\n>>> orjson.dumps(\n        {uuid.UUID(\"7202d115-7ff3-4c81-a7c1-2a1f067b1ece\"): [1, 2, 3]},\n        option=orjson.OPT_NON_STR_KEYS,\n    )\nb'{\"7202d115-7ff3-4c81-a7c1-2a1f067b1ece\":[1,2,3]}'\n>>> orjson.dumps(\n        {datetime.datetime(1970, 1, 1, 0, 0, 0): [1, 2, 3]},\n        option=orjson.OPT_NON_STR_KEYS | orjson.OPT_NAIVE_UTC,\n    )\nb'{\"1970-01-01T00:00:00+00:00\":[1,2,3]}'\nThese types are generally serialized how they would be as\nvalues, e.g., datetime.datetime is still an RFC 3339 string and respects\noptions affecting it. The exception is that int serialization does not\nrespect OPT_STRICT_INTEGER.\nThis option has the risk of creating duplicate keys. This is because non-str\nobjects may serialize to the same str as an existing key, e.g.,\n{\"1\": true, 1: false}. The last key to be inserted to the dict will be\nserialized last and a JSON deserializer will presumably take the last\noccurrence of a key (in the above, false). The first value will be lost.\nThis option is compatible with orjson.OPT_SORT_KEYS. 
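A minimal sketch of the duplicate-key risk just described; the exact output bytes are not shown, the point is that both keys below render as the JSON string "1":

import orjson

# "1" (str) and 1 (int) serialize to the same JSON key, so the document
# contains "1" twice; most deserializers will keep only the last occurrence.
orjson.dumps({"1": True, 1: False}, option=orjson.OPT_NON_STR_KEYS)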
If sorting is used,\nnote the sort is unstable and will be unpredictable for duplicate keys.\n>>> import orjson, datetime\n>>> orjson.dumps(\n    {\"other\": 1, datetime.date(1970, 1, 5): 2, datetime.date(1970, 1, 3): 3},\n    option=orjson.OPT_NON_STR_KEYS | orjson.OPT_SORT_KEYS\n)\nb'{\"1970-01-03\":3,\"1970-01-05\":2,\"other\":1}'\nThis measures serializing 589KiB of JSON comprising a list of 100 dict\nin which each dict has both 365 randomly-sorted int keys representing epoch\ntimestamps as well as one str key and the value for each key is a\nsingle integer. In \"str keys\", the keys were converted to str before\nserialization, and orjson still specifes option=orjson.OPT_NON_STR_KEYS\n(which is always somewhat slower).\n\n\n\nLibrary\nstr keys (ms)\nint keys (ms)\nint keys sorted (ms)\n\n\n\n\norjson\n1.53\n2.16\n4.29\n\n\nujson\n3.07\n5.65\n\n\n\nrapidjson\n4.29\n\n\n\n\nsimplejson\n11.24\n14.50\n21.86\n\n\njson\n7.17\n8.49\n\n\n\n\nujson is blank for sorting because it segfaults. json is blank because it\nraises TypeError on attempting to sort before converting all keys to str.\nrapidjson is blank because it does not support non-str keys. This can\nbe reproduced using the pynonstr script.\nOPT_OMIT_MICROSECONDS\nDo not serialize the microsecond field on datetime.datetime and\ndatetime.time instances.\n>>> import orjson, datetime\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0, 1),\n    )\nb'\"1970-01-01T00:00:00.000001\"'\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0, 1),\n        option=orjson.OPT_OMIT_MICROSECONDS,\n    )\nb'\"1970-01-01T00:00:00\"'\nOPT_PASSTHROUGH_DATACLASS\nPassthrough dataclasses.dataclass instances to default. This allows\ncustomizing their output but is much slower.\n>>> import orjson, dataclasses\n>>>\n@dataclasses.dataclass\nclass User:\n    id: str\n    name: str\n    password: str\n\ndef default(obj):\n    if isinstance(obj, User):\n        return {\"id\": obj.id, \"name\": obj.name}\n    raise TypeError\n\n>>> orjson.dumps(User(\"3b1\", \"asd\", \"zxc\"))\nb'{\"id\":\"3b1\",\"name\":\"asd\",\"password\":\"zxc\"}'\n>>> orjson.dumps(User(\"3b1\", \"asd\", \"zxc\"), option=orjson.OPT_PASSTHROUGH_DATACLASS)\nTypeError: Type is not JSON serializable: User\n>>> orjson.dumps(\n        User(\"3b1\", \"asd\", \"zxc\"),\n        option=orjson.OPT_PASSTHROUGH_DATACLASS,\n        default=default,\n    )\nb'{\"id\":\"3b1\",\"name\":\"asd\"}'\nOPT_PASSTHROUGH_DATETIME\nPassthrough datetime.datetime, datetime.date, and datetime.time instances\nto default. 
This allows serializing datetimes to a custom format, e.g.,\nHTTP dates:\n>>> import orjson, datetime\n>>>\ndef default(obj):\n    if isinstance(obj, datetime.datetime):\n        return obj.strftime(\"%a, %d %b %Y %H:%M:%S GMT\")\n    raise TypeError\n\n>>> orjson.dumps({\"created_at\": datetime.datetime(1970, 1, 1)})\nb'{\"created_at\":\"1970-01-01T00:00:00\"}'\n>>> orjson.dumps({\"created_at\": datetime.datetime(1970, 1, 1)}, option=orjson.OPT_PASSTHROUGH_DATETIME)\nTypeError: Type is not JSON serializable: datetime.datetime\n>>> orjson.dumps(\n        {\"created_at\": datetime.datetime(1970, 1, 1)},\n        option=orjson.OPT_PASSTHROUGH_DATETIME,\n        default=default,\n    )\nb'{\"created_at\":\"Thu, 01 Jan 1970 00:00:00 GMT\"}'\nThis does not affect datetimes in dict keys if using OPT_NON_STR_KEYS.\nOPT_PASSTHROUGH_SUBCLASS\nPassthrough subclasses of builtin types to default.\n>>> import orjson\n>>>\nclass Secret(str):\n    pass\n\ndef default(obj):\n    if isinstance(obj, Secret):\n        return \"******\"\n    raise TypeError\n\n>>> orjson.dumps(Secret(\"zxc\"))\nb'\"zxc\"'\n>>> orjson.dumps(Secret(\"zxc\"), option=orjson.OPT_PASSTHROUGH_SUBCLASS)\nTypeError: Type is not JSON serializable: Secret\n>>> orjson.dumps(Secret(\"zxc\"), option=orjson.OPT_PASSTHROUGH_SUBCLASS, default=default)\nb'\"******\"'\nThis does not affect serializing subclasses as dict keys if using\nOPT_NON_STR_KEYS.\nOPT_SERIALIZE_DATACLASS\nThis is deprecated and has no effect in version 3. In version 2 this was\nrequired to serialize  dataclasses.dataclass instances. For more, see\ndataclass.\nOPT_SERIALIZE_NUMPY\nSerialize numpy.ndarray instances. For more, see\nnumpy.\nOPT_SERIALIZE_UUID\nThis is deprecated and has no effect in version 3. In version 2 this was\nrequired to serialize uuid.UUID instances. For more, see\nUUID.\nOPT_SORT_KEYS\nSerialize dict keys in sorted order. The default is to serialize in an\nunspecified order. This is equivalent to sort_keys=True in the standard\nlibrary.\nThis can be used to ensure the order is deterministic for hashing or tests.\nIt has a substantial performance penalty and is not recommended in general.\n>>> import orjson\n>>> orjson.dumps({\"b\": 1, \"c\": 2, \"a\": 3})\nb'{\"b\":1,\"c\":2,\"a\":3}'\n>>> orjson.dumps({\"b\": 1, \"c\": 2, \"a\": 3}, option=orjson.OPT_SORT_KEYS)\nb'{\"a\":3,\"b\":1,\"c\":2}'\nThis measures serializing the twitter.json fixture unsorted and sorted:\n\n\n\nLibrary\nunsorted (ms)\nsorted (ms)\nvs. orjson\n\n\n\n\norjson\n0.32\n0.54\n1\n\n\nujson\n1.6\n2.07\n3.8\n\n\nrapidjson\n1.12\n1.65\n3.1\n\n\nsimplejson\n2.25\n3.13\n5.8\n\n\njson\n1.78\n2.32\n4.3\n\n\n\nThe benchmark can be reproduced using the pysort script.\nThe sorting is not collation/locale-aware:\n>>> import orjson\n>>> orjson.dumps({\"a\": 1, \"\u00e4\": 2, \"A\": 3}, option=orjson.OPT_SORT_KEYS)\nb'{\"A\":3,\"a\":1,\"\\xc3\\xa4\":2}'\nThis is the same sorting behavior as the standard library, rapidjson,\nsimplejson, and ujson.\ndataclass also serialize as maps but this has no effect on them.\nOPT_STRICT_INTEGER\nEnforce 53-bit limit on integers. The limit is otherwise 64 bits, the same as\nthe Python standard library. 
For more, see int.\nOPT_UTC_Z\nSerialize a UTC timezone on datetime.datetime instances as Z instead\nof +00:00.\n>>> import orjson, datetime, zoneinfo\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0, tzinfo=zoneinfo.ZoneInfo(\"UTC\")),\n    )\nb'\"1970-01-01T00:00:00+00:00\"'\n>>> orjson.dumps(\n        datetime.datetime(1970, 1, 1, 0, 0, 0, tzinfo=zoneinfo.ZoneInfo(\"UTC\")),\n        option=orjson.OPT_UTC_Z\n    )\nb'\"1970-01-01T00:00:00Z\"'\nFragment\norjson.Fragment includes already-serialized JSON in a document. This is an\nefficient way to include JSON blobs from a cache, JSONB field, or separately\nserialized object without first deserializing to Python objects via loads().\n>>> import orjson\n>>> orjson.dumps({\"key\": \"zxc\", \"data\": orjson.Fragment(b'{\"a\": \"b\", \"c\": 1}')})\nb'{\"key\":\"zxc\",\"data\":{\"a\": \"b\", \"c\": 1}}'\nIt does no reformatting: orjson.OPT_INDENT_2 will not affect a\ncompact blob nor will a pretty-printed JSON blob be rewritten as compact.\nThe input must be bytes or str and given as a positional argument.\nThis raises orjson.JSONEncodeError if a str is given and the input is\nnot valid UTF-8. It otherwise does no validation and it is possible to\nwrite invalid JSON. This does not escape characters. The implementation is\ntested to not crash if given invalid strings or invalid JSON.\nThis is similar to RawJSON in rapidjson.\nDeserialize\ndef loads(__obj: Union[bytes, bytearray, memoryview, str]) -> Any: ...\nloads() deserializes JSON to Python objects. It deserializes to dict,\nlist, int, float, str, bool, and None objects.\nbytes, bytearray, memoryview, and str input are accepted. If the input\nexists as a memoryview, bytearray, or bytes object, it is recommended to\npass these directly rather than creating an unnecessary str object. That is,\norjson.loads(b\"{}\") instead of orjson.loads(b\"{}\".decode(\"utf-8\")). This\nhas lower memory usage and lower latency.\nThe input must be valid UTF-8.\norjson maintains a cache of map keys for the duration of the process. This\ncauses a net reduction in memory usage by avoiding duplicate strings. The\nkeys must be at most 64 bytes to be cached and 1024 entries are stored.\nThe global interpreter lock (GIL) is held for the duration of the call.\nIt raises JSONDecodeError if given an invalid type or invalid\nJSON. This includes if the input contains NaN, Infinity, or -Infinity,\nwhich the standard library allows, but is not valid JSON.\nJSONDecodeError is a subclass of json.JSONDecodeError and ValueError.\nThis is for compatibility with the standard library.\nTypes\ndataclass\norjson serializes instances of dataclasses.dataclass natively. It serializes\ninstances 40-50x as fast as other libraries and avoids a severe slowdown seen\nin other libraries compared to serializing dict.\nIt is supported to pass all variants of dataclasses, including dataclasses\nusing __slots__, frozen dataclasses, those with optional or default\nattributes, and subclasses. There is a performance benefit to not\nusing __slots__.\n\n\n\nLibrary\ndict (ms)\ndataclass (ms)\nvs. orjson\n\n\n\n\norjson\n1.40\n1.60\n1\n\n\nujson\n\n\n\n\n\nrapidjson\n3.64\n68.48\n42\n\n\nsimplejson\n14.21\n92.18\n57\n\n\njson\n13.28\n94.90\n59\n\n\n\nThis measures serializing 555KiB of JSON, orjson natively and other libraries\nusing default to serialize the output of dataclasses.asdict(). 
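For contrast, a rough sketch of the default-based path the comparison uses for the other libraries, shown here with the standard library json and an illustrative dataclass:

import dataclasses, json

@dataclasses.dataclass
class Item:
    id: int
    active: bool = False

def default(obj):
    # Convert dataclass instances to plain dicts so json can serialize them.
    if dataclasses.is_dataclass(obj):
        return dataclasses.asdict(obj)
    raise TypeError

json.dumps(Item(1, True), default=default)
# '{"id": 1, "active": true}'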
This can be\nreproduced using the pydataclass script.\nDataclasses are serialized as maps, with every attribute serialized and in\nthe order given on class definition:\n>>> import dataclasses, orjson, typing\n\n@dataclasses.dataclass\nclass Member:\n    id: int\n    active: bool = dataclasses.field(default=False)\n\n@dataclasses.dataclass\nclass Object:\n    id: int\n    name: str\n    members: typing.List[Member]\n\n>>> orjson.dumps(Object(1, \"a\", [Member(1, True), Member(2)]))\nb'{\"id\":1,\"name\":\"a\",\"members\":[{\"id\":1,\"active\":true},{\"id\":2,\"active\":false}]}'\ndatetime\norjson serializes datetime.datetime objects to\nRFC 3339 format,\ne.g., \"1970-01-01T00:00:00+00:00\". This is a subset of ISO 8601 and is\ncompatible with isoformat() in the standard library.\n>>> import orjson, datetime, zoneinfo\n>>> orjson.dumps(\n    datetime.datetime(2018, 12, 1, 2, 3, 4, 9, tzinfo=zoneinfo.ZoneInfo(\"Australia/Adelaide\"))\n)\nb'\"2018-12-01T02:03:04.000009+10:30\"'\n>>> orjson.dumps(\n    datetime.datetime(2100, 9, 1, 21, 55, 2).replace(tzinfo=zoneinfo.ZoneInfo(\"UTC\"))\n)\nb'\"2100-09-01T21:55:02+00:00\"'\n>>> orjson.dumps(\n    datetime.datetime(2100, 9, 1, 21, 55, 2)\n)\nb'\"2100-09-01T21:55:02\"'\ndatetime.datetime supports instances with a tzinfo that is None,\ndatetime.timezone.utc, a timezone instance from the python3.9+ zoneinfo\nmodule, or a timezone instance from the third-party pendulum, pytz, or\ndateutil/arrow libraries.\nIt is fastest to use the standard library's zoneinfo.ZoneInfo for timezones.\ndatetime.time objects must not have a tzinfo.\n>>> import orjson, datetime\n>>> orjson.dumps(datetime.time(12, 0, 15, 290))\nb'\"12:00:15.000290\"'\ndatetime.date objects will always serialize.\n>>> import orjson, datetime\n>>> orjson.dumps(datetime.date(1900, 1, 2))\nb'\"1900-01-02\"'\nErrors with tzinfo result in JSONEncodeError being raised.\nTo disable serialization of datetime objects specify the option\norjson.OPT_PASSTHROUGH_DATETIME.\nTo use \"Z\" suffix instead of \"+00:00\" to indicate UTC (\"Zulu\") time, use the option\norjson.OPT_UTC_Z.\nTo assume datetimes without timezone are UTC, use the option orjson.OPT_NAIVE_UTC.\nenum\norjson serializes enums natively. 
Options apply to their values.\n>>> import enum, datetime, orjson\n>>>\nclass DatetimeEnum(enum.Enum):\n    EPOCH = datetime.datetime(1970, 1, 1, 0, 0, 0)\n>>> orjson.dumps(DatetimeEnum.EPOCH)\nb'\"1970-01-01T00:00:00\"'\n>>> orjson.dumps(DatetimeEnum.EPOCH, option=orjson.OPT_NAIVE_UTC)\nb'\"1970-01-01T00:00:00+00:00\"'\nEnums with members that are not supported types can be serialized using\ndefault:\n>>> import enum, orjson\n>>>\nclass Custom:\n    def __init__(self, val):\n        self.val = val\n\ndef default(obj):\n    if isinstance(obj, Custom):\n        return obj.val\n    raise TypeError\n\nclass CustomEnum(enum.Enum):\n    ONE = Custom(1)\n\n>>> orjson.dumps(CustomEnum.ONE, default=default)\nb'1'\nfloat\norjson serializes and deserializes double precision floats with no loss of\nprecision and consistent rounding.\norjson.dumps() serializes Nan, Infinity, and -Infinity, which are not\ncompliant JSON, as null:\n>>> import orjson, ujson, rapidjson, json\n>>> orjson.dumps([float(\"NaN\"), float(\"Infinity\"), float(\"-Infinity\")])\nb'[null,null,null]'\n>>> ujson.dumps([float(\"NaN\"), float(\"Infinity\"), float(\"-Infinity\")])\nOverflowError: Invalid Inf value when encoding double\n>>> rapidjson.dumps([float(\"NaN\"), float(\"Infinity\"), float(\"-Infinity\")])\n'[NaN,Infinity,-Infinity]'\n>>> json.dumps([float(\"NaN\"), float(\"Infinity\"), float(\"-Infinity\")])\n'[NaN, Infinity, -Infinity]'\nint\norjson serializes and deserializes 64-bit integers by default. The range\nsupported is a signed 64-bit integer's minimum (-9223372036854775807) to\nan unsigned 64-bit integer's maximum (18446744073709551615). This\nis widely compatible, but there are implementations\nthat only support 53-bits for integers, e.g.,\nweb browsers. For those implementations, dumps() can be configured to\nraise a JSONEncodeError on values exceeding the 53-bit range.\n>>> import orjson\n>>> orjson.dumps(9007199254740992)\nb'9007199254740992'\n>>> orjson.dumps(9007199254740992, option=orjson.OPT_STRICT_INTEGER)\nJSONEncodeError: Integer exceeds 53-bit range\n>>> orjson.dumps(-9007199254740992, option=orjson.OPT_STRICT_INTEGER)\nJSONEncodeError: Integer exceeds 53-bit range\nnumpy\norjson natively serializes numpy.ndarray and individual\nnumpy.float64, numpy.float32,\nnumpy.int64, numpy.int32, numpy.int16, numpy.int8,\nnumpy.uint64, numpy.uint32, numpy.uint16, numpy.uint8,\nnumpy.uintp, numpy.intp, numpy.datetime64, and numpy.bool\ninstances.\norjson is faster than all compared libraries at serializing\nnumpy instances. Serializing numpy data requires specifying\noption=orjson.OPT_SERIALIZE_NUMPY.\n>>> import orjson, numpy\n>>> orjson.dumps(\n        numpy.array([[1, 2, 3], [4, 5, 6]]),\n        option=orjson.OPT_SERIALIZE_NUMPY,\n)\nb'[[1,2,3],[4,5,6]]'\nThe array must be a contiguous C array (C_CONTIGUOUS) and one of the\nsupported datatypes.\nNote a difference between serializing numpy.float32 using ndarray.tolist()\nor orjson.dumps(..., option=orjson.OPT_SERIALIZE_NUMPY): tolist() converts\nto a double before serializing and orjson's native path does not. 
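A small sketch for observing that difference locally; no output digits are shown here because they depend on the value:

import numpy, orjson

arr = numpy.array([0.1], dtype=numpy.float32)
native = orjson.dumps(arr, option=orjson.OPT_SERIALIZE_NUMPY)  # float32 serialized directly
via_tolist = orjson.dumps(arr.tolist())                        # tolist() converts to double first
# Comparing native and via_tolist shows the rounding difference described above.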
This\ncan result in different rounding.\nnumpy.datetime64 instances are serialized as RFC 3339 strings and\ndatetime options affect them.\n>>> import orjson, numpy\n>>> orjson.dumps(\n        numpy.datetime64(\"2021-01-01T00:00:00.172\"),\n        option=orjson.OPT_SERIALIZE_NUMPY,\n)\nb'\"2021-01-01T00:00:00.172000\"'\n>>> orjson.dumps(\n        numpy.datetime64(\"2021-01-01T00:00:00.172\"),\n        option=(\n            orjson.OPT_SERIALIZE_NUMPY |\n            orjson.OPT_NAIVE_UTC |\n            orjson.OPT_OMIT_MICROSECONDS\n        ),\n)\nb'\"2021-01-01T00:00:00+00:00\"'\nIf an array is not a contiguous C array, contains an unsupported datatype,\nor contains a numpy.datetime64 using an unsupported representation\n(e.g., picoseconds), orjson falls through to default. In default,\nobj.tolist() can be specified. If an array is malformed, which\nis not expected, orjson.JSONEncodeError is raised.\nThis measures serializing 92MiB of JSON from an numpy.ndarray with\ndimensions of (50000, 100) and numpy.float64 values:\n\n\n\nLibrary\nLatency (ms)\nRSS diff (MiB)\nvs. orjson\n\n\n\n\norjson\n194\n99\n1.0\n\n\nujson\n\n\n\n\n\nrapidjson\n3,048\n309\n15.7\n\n\nsimplejson\n3,023\n297\n15.6\n\n\njson\n3,133\n297\n16.1\n\n\n\nThis measures serializing 100MiB of JSON from an numpy.ndarray with\ndimensions of (100000, 100) and numpy.int32 values:\n\n\n\nLibrary\nLatency (ms)\nRSS diff (MiB)\nvs. orjson\n\n\n\n\norjson\n178\n115\n1.0\n\n\nujson\n\n\n\n\n\nrapidjson\n1,512\n551\n8.5\n\n\nsimplejson\n1,606\n504\n9.0\n\n\njson\n1,506\n503\n8.4\n\n\n\nThis measures serializing 105MiB of JSON from an numpy.ndarray with\ndimensions of (100000, 200) and numpy.bool values:\n\n\n\nLibrary\nLatency (ms)\nRSS diff (MiB)\nvs. orjson\n\n\n\n\norjson\n157\n120\n1.0\n\n\nujson\n\n\n\n\n\nrapidjson\n710\n327\n4.5\n\n\nsimplejson\n931\n398\n5.9\n\n\njson\n996\n400\n6.3\n\n\n\nIn these benchmarks, orjson serializes natively, ujson is blank because it\ndoes not support a default parameter, and the other libraries serialize\nndarray.tolist() via default. The RSS column measures peak memory\nusage during serialization. This can be reproduced using the pynumpy script.\norjson does not have an installation or compilation dependency on numpy. The\nimplementation is independent, reading numpy.ndarray using\nPyArrayInterface.\nstr\norjson is strict about UTF-8 conformance. This is stricter than the standard\nlibrary's json module, which will serialize and deserialize UTF-16 surrogates,\ne.g., \"\\ud800\", that are invalid UTF-8.\nIf orjson.dumps() is given a str that does not contain valid UTF-8,\norjson.JSONEncodeError is raised. 
If loads() receives invalid UTF-8,\norjson.JSONDecodeError is raised.\norjson and rapidjson are the only compared JSON libraries to consistently\nerror on bad input.\n>>> import orjson, ujson, rapidjson, json\n>>> orjson.dumps('\\ud800')\nJSONEncodeError: str is not valid UTF-8: surrogates not allowed\n>>> ujson.dumps('\\ud800')\nUnicodeEncodeError: 'utf-8' codec ...\n>>> rapidjson.dumps('\\ud800')\nUnicodeEncodeError: 'utf-8' codec ...\n>>> json.dumps('\\ud800')\n'\"\\\\ud800\"'\n>>> orjson.loads('\"\\\\ud800\"')\nJSONDecodeError: unexpected end of hex escape at line 1 column 8: line 1 column 1 (char 0)\n>>> ujson.loads('\"\\\\ud800\"')\n''\n>>> rapidjson.loads('\"\\\\ud800\"')\nValueError: Parse error at offset 1: The surrogate pair in string is invalid.\n>>> json.loads('\"\\\\ud800\"')\n'\\ud800'\nTo make a best effort at deserializing bad input, first decode bytes using\nthe replace or lossy argument for errors:\n>>> import orjson\n>>> orjson.loads(b'\"\\xed\\xa0\\x80\"')\nJSONDecodeError: str is not valid UTF-8: surrogates not allowed\n>>> orjson.loads(b'\"\\xed\\xa0\\x80\"'.decode(\"utf-8\", \"replace\"))\n'\ufffd\ufffd\ufffd'\nuuid\norjson serializes uuid.UUID instances to\nRFC 4122 format, e.g.,\n\"f81d4fae-7dec-11d0-a765-00a0c91e6bf6\".\n>>> import orjson, uuid\n>>> orjson.dumps(uuid.UUID('f81d4fae-7dec-11d0-a765-00a0c91e6bf6'))\nb'\"f81d4fae-7dec-11d0-a765-00a0c91e6bf6\"'\n>>> orjson.dumps(uuid.uuid5(uuid.NAMESPACE_DNS, \"python.org\"))\nb'\"886313e1-3b8a-5372-9b90-0c9aee199e5d\"'\nTesting\nThe library has comprehensive tests. There are tests against fixtures in the\nJSONTestSuite and\nnativejson-benchmark\nrepositories. It is tested to not crash against the\nBig List of Naughty Strings.\nIt is tested to not leak memory. It is tested to not crash\nagainst and not accept invalid UTF-8. There are integration tests\nexercising the library's use in web servers (gunicorn using multiprocess/forked\nworkers) and when\nmultithreaded. It also uses some tests from the ultrajson library.\norjson is the most correct of the compared libraries. This graph shows how each\nlibrary handles a combined 342 JSON fixtures from the\nJSONTestSuite and\nnativejson-benchmark tests:\n\n\n\nLibrary\nInvalid JSON documents not rejected\nValid JSON documents not deserialized\n\n\n\n\norjson\n0\n0\n\n\nujson\n38\n0\n\n\nrapidjson\n6\n0\n\n\nsimplejson\n13\n0\n\n\njson\n17\n0\n\n\n\nThis shows that all libraries deserialize valid JSON but only orjson\ncorrectly rejects the given invalid JSON fixtures. Errors are largely due to\naccepting invalid strings and numbers.\nThe graph above can be reproduced using the pycorrectness script.\nPerformance\nSerialization and deserialization performance of orjson is better than\nultrajson, rapidjson, simplejson, or json. 
The benchmarks are done on\nfixtures of real data:\n\n\ntwitter.json, 631.5KiB, results of a search on Twitter for \"\u4e00\", containing\nCJK strings, dictionaries of strings and arrays of dictionaries, indented.\n\n\ngithub.json, 55.8KiB, a GitHub activity feed, containing dictionaries of\nstrings and arrays of dictionaries, not indented.\n\n\ncitm_catalog.json, 1.7MiB, concert data, containing nested dictionaries of\nstrings and arrays of integers, indented.\n\n\ncanada.json, 2.2MiB, coordinates of the Canadian border in GeoJSON\nformat, containing floats and arrays, indented.\n\n\nLatency\ntwitter.json serialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n0.33\n3069.4\n1\n\n\nujson\n1.68\n592.8\n5.15\n\n\nrapidjson\n1.12\n891\n3.45\n\n\nsimplejson\n2.29\n436.2\n7.03\n\n\njson\n1.8\n556.6\n5.52\n\n\n\ntwitter.json deserialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n0.81\n1237.6\n1\n\n\nujson\n1.87\n533.9\n2.32\n\n\nrapidjson\n2.97\n335.8\n3.67\n\n\nsimplejson\n2.15\n463.8\n2.66\n\n\njson\n2.45\n408.2\n3.03\n\n\n\ngithub.json serialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n0.03\n28817.3\n1\n\n\nujson\n0.18\n5478.2\n5.26\n\n\nrapidjson\n0.1\n9686.4\n2.98\n\n\nsimplejson\n0.26\n3901.3\n7.39\n\n\njson\n0.18\n5437\n5.27\n\n\n\ngithub.json deserialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n0.07\n15270\n1\n\n\nujson\n0.19\n5374.8\n2.84\n\n\nrapidjson\n0.17\n5854.9\n2.59\n\n\nsimplejson\n0.15\n6707.4\n2.27\n\n\njson\n0.16\n6397.3\n2.39\n\n\n\ncitm_catalog.json serialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n0.58\n1722.5\n1\n\n\nujson\n2.89\n345.6\n4.99\n\n\nrapidjson\n1.83\n546.4\n3.15\n\n\nsimplejson\n10.39\n95.9\n17.89\n\n\njson\n3.93\n254.6\n6.77\n\n\n\ncitm_catalog.json deserialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n1.76\n569.2\n1\n\n\nujson\n3.5\n284.3\n1.99\n\n\nrapidjson\n5.77\n173.2\n3.28\n\n\nsimplejson\n5.13\n194.7\n2.92\n\n\njson\n4.99\n200.5\n2.84\n\n\n\ncanada.json serialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n3.62\n276.3\n1\n\n\nujson\n14.16\n70.6\n3.91\n\n\nrapidjson\n33.64\n29.7\n9.29\n\n\nsimplejson\n57.46\n17.4\n15.88\n\n\njson\n35.7\n28\n9.86\n\n\n\ncanada.json deserialization\n\n\n\nLibrary\nMedian latency (milliseconds)\nOperations per second\nRelative (latency)\n\n\n\n\norjson\n3.89\n256.6\n1\n\n\nujson\n8.73\n114.3\n2.24\n\n\nrapidjson\n23.33\n42.8\n5.99\n\n\nsimplejson\n23.99\n41.7\n6.16\n\n\njson\n21.1\n47.4\n5.42\n\n\n\nMemory\norjson as of 3.7.0 has higher baseline memory usage than other libraries\ndue to a persistent buffer used for parsing. 
Incremental memory usage when\ndeserializing is similar to the standard library and other third-party\nlibraries.\nThis measures, in the first column, RSS after importing a library and reading\nthe fixture, and in the second column, increases in RSS after repeatedly\ncalling loads() on the fixture.\ntwitter.json\n\n\n\nLibrary\nimport, read() RSS (MiB)\nloads() increase in RSS (MiB)\n\n\n\n\norjson\n21.8\n2.8\n\n\nujson\n14.3\n4.8\n\n\nrapidjson\n14.9\n4.6\n\n\nsimplejson\n13.4\n2.4\n\n\njson\n13.1\n2.3\n\n\n\ngithub.json\n\n\n\nLibrary\nimport, read() RSS (MiB)\nloads() increase in RSS (MiB)\n\n\n\n\norjson\n21.2\n0.5\n\n\nujson\n13.6\n0.6\n\n\nrapidjson\n14.1\n0.5\n\n\nsimplejson\n12.5\n0.3\n\n\njson\n12.4\n0.3\n\n\n\ncitm_catalog.json\n\n\n\nLibrary\nimport, read() RSS (MiB)\nloads() increase in RSS (MiB)\n\n\n\n\norjson\n23\n10.6\n\n\nujson\n15.2\n11.2\n\n\nrapidjson\n15.8\n29.7\n\n\nsimplejson\n14.4\n24.7\n\n\njson\n13.9\n24.7\n\n\n\ncanada.json\n\n\n\nLibrary\nimport, read() RSS (MiB)\nloads() increase in RSS (MiB)\n\n\n\n\norjson\n23.2\n21.3\n\n\nujson\n15.6\n19.2\n\n\nrapidjson\n16.3\n23.4\n\n\nsimplejson\n15\n21.1\n\n\njson\n14.3\n20.9\n\n\n\nReproducing\nThe above was measured using Python 3.10.5 on Linux (amd64) with\norjson 3.7.9, ujson 5.4.0, python-rapidson 1.8, and simplejson 3.17.6.\nThe latency results can be reproduced using the pybench and graph\nscripts. The memory results can be reproduced using the pymem script.\nQuestions\nWhy can't I install it from PyPI?\nProbably pip needs to be upgraded to version 20.3 or later to support\nthe latest manylinux_x_y or universal2 wheel formats.\n\"Cargo, the Rust package manager, is not installed or is not on PATH.\"\nThis happens when there are no binary wheels (like manylinux) for your\nplatform on PyPI. You can install Rust through\nrustup or a package manager and then it will compile.\nWill it deserialize to dataclasses, UUIDs, decimals, etc or support object_hook?\nNo. This requires a schema specifying what types are expected and how to\nhandle errors etc. This is addressed by data validation libraries a\nlevel above this.\nWill it serialize to str?\nNo. bytes is the correct type for a serialized blob.\nWill it support PyPy?\nProbably not.\nPackaging\nTo package orjson requires at least Rust 1.60\nand the maturin build tool. The recommended\nbuild command is:\nmaturin build --release --strip\nIt benefits from also having a C build environment to compile a faster\ndeserialization backend. See this project's manylinux_2_28 builds for an\nexample using clang and LTO.\nThe project's own CI tests against nightly-2023-06-30 and stable 1.60. It\nis prudent to pin the nightly version because that channel can introduce\nbreaking changes.\norjson is tested for amd64, aarch64, arm7, ppc64le, and s390x on Linux. It\nis tested for amd64 on macOS and cross-compiles for aarch64. For Windows\nit is tested on amd64 and i686.\nThere are no runtime dependencies other than libc.\nThe source distribution on PyPI contains all dependencies' source and can be\nbuilt without network access. The file can be downloaded from\nhttps://files.pythonhosted.org/packages/source/o/orjson/orjson-${version}.tar.gz.\norjson's tests are included in the source distribution on PyPI. The\nrequirements to run the tests are specified in test/requirements.txt. The\ntests should be run as part of the build. 
It can be run with\npytest -q test.\nLicense\norjson was written by ijl <ijl@mailbox.org>, copyright 2018 - 2023, licensed\nunder both the Apache 2 and MIT licenses.\n\n\n"}, {"name": "opt-einsum", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nOptimized Einsum\nOptimized Einsum: A tensor contraction order optimizer\nExample usage\nFeatures\nInstallation\nCitation\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\nOptimized Einsum\n\n\n\n\n\n\n\nOptimized Einsum: A tensor contraction order optimizer\nOptimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g.,\nnp.einsum,\ndask.array.einsum,\npytorch.einsum,\ntensorflow.einsum,\n)\nby optimizing the expression's contraction order and dispatching many\noperations to canonical BLAS, cuBLAS, or other specialized routines.\nOptimized\neinsum is agnostic to the backend and can handle NumPy, Dask, PyTorch,\nTensorflow, CuPy, Sparse, Theano, JAX, and Autograd arrays as well as potentially\nany library which conforms to a standard API. See the\ndocumentation for more\ninformation.\nExample usage\nThe opt_einsum.contract\nfunction can often act as a drop-in replacement for einsum\nfunctions without further changes to the code while providing superior performance.\nHere, a tensor contraction is performed with and without optimization:\nimport numpy as np\nfrom opt_einsum import contract\n\nN = 10\nC = np.random.rand(N, N)\nI = np.random.rand(N, N, N, N)\n\n%timeit np.einsum('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)\n1 loops, best of 3: 934 ms per loop\n\n%timeit contract('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)\n1000 loops, best of 3: 324 us per loop\nIn this particular example, we see a ~3000x performance improvement which is\nnot uncommon when compared against unoptimized contractions. See the backend\nexamples\nfor more information on using other backends.\nFeatures\nThe algorithms found in this repository often power the einsum optimizations\nin many of the above projects. For example, the optimization of np.einsum\nhas been passed upstream and most of the same features that can be found in\nthis repository can be enabled with np.einsum(..., optimize=True). However,\nthis repository often has more up to date algorithms for complex contractions.\nThe following capabilities are enabled by opt_einsum:\n\nInspect detailed information about the path chosen.\nPerform contractions with numerous backends, including on the GPU and with libraries such as TensorFlow and PyTorch.\nGenerate reusable expressions, potentially with constant tensors, that can be compiled for greater performance.\nUse an arbitrary number of indices to find contractions for hundreds or even thousands of tensors.\nShare intermediate computations among multiple contractions.\nCompute gradients of tensor contractions using autograd or jax\n\nPlease see the documentation for more features!\nInstallation\nopt_einsum can either be installed via pip install opt_einsum or from conda conda install opt_einsum -c conda-forge.\nSee the installation documentation for further methods.\nCitation\nIf this code has benefited your research, please support us by citing:\nDaniel G. A. Smith and Johnnie Gray, opt_einsum - A Python package for optimizing contraction order for einsum-like expressions. 
Journal of Open Source Software, 2018, 3(26), 753\nDOI: https://doi.org/10.21105/joss.00753\nContributing\nAll contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.\nA detailed overview on how to contribute can be found in the contributing guide.\n\n\n"}, {"name": "openpyxl", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Read/write Excel 2010 xlsx/xlsm files", "category": "Excel"}, {"name": "opencv-python", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOpenCV on Wheels\nInstallation and Usage\nFrequently Asked Questions\nDocumentation for opencv-python\nCI build process\nManual builds\nManual debug builds\nSource distributions\nLicensing\nVersioning\nReleases\nDevelopment builds\nManylinux wheels\nSupported Python versions\nBackward compatibility\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\nOpenCV on Wheels\n\nInstallation and Usage\n\n\nFrequently Asked Questions\nDocumentation for opencv-python\n\nCI build process\nManual builds\n\nManual debug builds\nSource distributions\n\n\nLicensing\nVersioning\nReleases\nDevelopment builds\nManylinux wheels\nSupported Python versions\nBackward compatibility\n\n\n\nOpenCV on Wheels\nPre-built CPU-only OpenCV packages for Python.\nCheck the manual build section if you wish to compile the bindings from source to enable additional modules such as CUDA.\nInstallation and Usage\n\n\nIf you have previous/other manually installed (= not installed via pip) version of OpenCV installed (e.g. cv2 module in the root of Python's site-packages), remove it before installation to avoid conflicts.\n\n\nMake sure that your pip version is up-to-date (19.3 is the minimum supported version): pip install --upgrade pip. Check version with pip -V. For example Linux distributions ship usually with very old pip versions which cause a lot of unexpected problems especially with the manylinux format.\n\n\nSelect the correct package for your environment:\nThere are four different packages (see options 1, 2, 3 and 4 below) and you should SELECT ONLY ONE OF THEM. Do not install multiple different packages in the same environment. There is no plugin architecture: all the packages use the same namespace (cv2). 
If you installed multiple different packages in the same environment, uninstall them all with pip uninstall and reinstall only one package.\na. Packages for standard desktop environments (Windows, macOS, almost any GNU/Linux distribution)\n\nOption 1 - Main modules package: pip install opencv-python\nOption 2 - Full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python (check contrib/extra modules listing from OpenCV documentation)\n\nb. Packages for server (headless) environments (such as Docker, cloud environments etc.), no GUI library dependencies\nThese packages are smaller than the two other packages above because they do not contain any GUI functionality (not compiled with Qt / other GUI components). This means that the packages avoid a heavy dependency chain to X11 libraries and you will have for example smaller Docker images as a result. You should always use these packages if you do not use cv2.imshow et al. or you are using some other package (such as PyQt) than OpenCV to create your GUI.\n\nOption 3 - Headless main modules package: pip install opencv-python-headless\nOption 4 - Headless full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python-headless (check contrib/extra modules listing from OpenCV documentation)\n\n\n\nImport the package:\nimport cv2\nAll packages contain Haar cascade files. cv2.data.haarcascades can be used as a shortcut to the data folder. For example:\ncv2.CascadeClassifier(cv2.data.haarcascades + \"haarcascade_frontalface_default.xml\")\n\n\nRead OpenCV documentation\n\n\nBefore opening a new issue, read the FAQ below and have a look at the other issues which are already open.\n\n\nFrequently Asked Questions\nQ: Do I need to install also OpenCV separately?\nA: No, the packages are special wheel binary packages and they already contain statically built OpenCV binaries.\nQ: Pip install fails with ModuleNotFoundError: No module named 'skbuild'?\nSince opencv-python version 4.3.0.*, manylinux1 wheels were replaced by manylinux2014 wheels. If your pip is too old, it will try to use the new source distribution introduced in 4.3.0.38 to manually build OpenCV because it does not know how to install manylinux2014 wheels. However, source build will also fail because of too old pip because it does not understand build dependencies in pyproject.toml. To use the new manylinux2014 pre-built wheels (or to build from source), your pip version must be >= 19.3. Please upgrade pip with pip install --upgrade pip.\nQ: Import fails on Windows: ImportError: DLL load failed: The specified module could not be found.?\nA: If the import fails on Windows, make sure you have Visual C++ redistributable 2015 installed. If you are using older Windows version than Windows 10 and latest system updates are not installed, Universal C Runtime might be also required.\nWindows N and KN editions do not include Media Feature Pack which is required by OpenCV. If you are using Windows N or KN edition, please install also Windows Media Feature Pack.\nIf you have Windows Server 2012+, media DLLs are probably missing too; please install the Feature called \"Media Foundation\" in the Server Manager. 
Beware, some posts advise to install \"Windows Server Essentials Media Pack\", but this one requires the \"Windows Server Essentials Experience\" role, and this role will deeply affect your Windows Server configuration (by enforcing active directory integration etc.); so just installing the \"Media Foundation\" should be a safer choice.\nIf the above does not help, check if you are using Anaconda. Old Anaconda versions have a bug which causes the error, see this issue for a manual fix.\nIf you still encounter the error after you have checked all the previous solutions, download Dependencies and open the cv2.pyd (located usually at C:\\Users\\username\\AppData\\Local\\Programs\\Python\\PythonXX\\Lib\\site-packages\\cv2) file with it to debug missing DLL issues.\nQ: I have some other import errors?\nA: Make sure you have removed old manual installations of OpenCV Python bindings (cv2.so or cv2.pyd in site-packages).\nQ: Function foo() or method bar() returns wrong result, throws exception or crashes interpreter. What should I do?\nA: The repository contains only OpenCV-Python package build scripts, but not OpenCV itself. Python bindings for OpenCV are developed in official OpenCV repository and it's the best place to report issues. Also please check OpenCV wiki and the official OpenCV forum before file new bugs.\nQ: Why the packages do not include non-free algorithms?\nA: Non-free algorithms such as SURF are not included in these packages because they are patented / non-free and therefore cannot be distributed as built binaries. Note that SIFT is included in the builds due to patent expiration since OpenCV versions 4.3.0 and 3.4.10. See this issue for more info: #126\nQ: Why the package and import are different (opencv-python vs. cv2)?\nA: It's easier for users to understand opencv-python than cv2 and it makes it easier to find the package with search engines. cv2 (old interface in old OpenCV versions was named as cv) is the name that OpenCV developers chose when they created the binding generators. This is kept as the import name to be consistent with different kind of tutorials around the internet. 
Changing the import name or behaviour would be also confusing to experienced users who are accustomed to the import cv2.\nDocumentation for opencv-python\n\n\n\nThe aim of this repository is to provide means to package each new OpenCV release for the most used Python versions and platforms.\nCI build process\nThe project is structured like a normal Python package with a standard setup.py file.\nThe build process for a single entry in the build matrices is as follows (see for example .github/workflows/build_wheels_linux.yml file):\n\n\nIn Linux and MacOS build: get OpenCV's optional C dependencies that we compile against\n\n\nCheckout repository and submodules\n\nOpenCV is included as submodule and the version is updated\nmanually by maintainers when a new OpenCV release has been made\nContrib modules are also included as a submodule\n\n\n\nFind OpenCV version from the sources\n\n\nBuild OpenCV\n\ntests are disabled, otherwise build time increases too much\nthere are 4 build matrix entries for each build combination: with and without contrib modules, with and without GUI (headless)\nLinux builds run in manylinux Docker containers (CentOS 5)\nsource distributions are separate entries in the build matrix\n\n\n\nRearrange OpenCV's build result, add our custom files and generate wheel\n\n\nLinux and macOS wheels are transformed with auditwheel and delocate, correspondingly\n\n\nInstall the generated wheel\n\n\nTest that Python can import the library and run some sanity checks\n\n\nUse twine to upload the generated wheel to PyPI (only in release builds)\n\n\nSteps 1--4 are handled by pip wheel.\nThe build can be customized with environment variables. In addition to any variables that OpenCV's build accepts, we recognize:\n\nCI_BUILD. Set to 1 to emulate the CI environment build behaviour. Used only in CI builds to force certain build flags on in setup.py. Do not use this unless you know what you are doing.\nENABLE_CONTRIB and ENABLE_HEADLESS. Set to 1 to build the contrib and/or headless version\nENABLE_JAVA, Set to 1 to enable the Java client build.  This is disabled by default.\nCMAKE_ARGS. Additional arguments for OpenCV's CMake invocation. You can use this to make a custom build.\n\nSee the next section for more info about manual builds outside the CI environment.\nManual builds\nIf some dependency is not enabled in the pre-built wheels, you can also run the build locally to create a custom wheel.\n\nClone this repository: git clone --recursive https://github.com/opencv/opencv-python.git\ncd opencv-python\n\nyou can use git to checkout some other version of OpenCV in the opencv and opencv_contrib submodules if needed\n\n\nAdd custom Cmake flags if needed, for example: export CMAKE_ARGS=\"-DSOME_FLAG=ON -DSOME_OTHER_FLAG=OFF\" (in Windows you need to set environment variables differently depending on Command Line or PowerShell)\nSelect the package flavor which you wish to build with ENABLE_CONTRIB and ENABLE_HEADLESS: i.e. export ENABLE_CONTRIB=1 if you wish to build opencv-contrib-python\nRun pip wheel . --verbose. NOTE: make sure you have the latest pip version, the pip wheel command replaces the old python setup.py bdist_wheel command which does not support pyproject.toml.\n\nthis might take anything from 5 minutes to over 2 hours depending on your hardware\n\n\nPip will print fresh will location at the end of build procedure. If you use old approach with setup.py file wheel package will be placed in dist folder. 
Package is ready and you can do with that whatever you wish.\n\nOptional: on Linux use some of the manylinux images as a build hosts if maximum portability is needed and run auditwheel for the wheel after build\nOptional: on macOS use delocate (same as auditwheel but for macOS) for better portability\n\n\n\nManual debug builds\nIn order to build opencv-python in an unoptimized debug build, you need to side-step the normal process a bit.\n\nInstall the packages scikit-build and numpy via pip.\nRun the command python setup.py bdist_wheel --build-type=Debug.\nInstall the generated wheel file in the dist/ folder with pip install dist/wheelname.whl.\n\nIf you would like the build produce all compiler commands, then the following combination of flags and environment variables has been tested to work on Linux:\nexport CMAKE_ARGS='-DCMAKE_VERBOSE_MAKEFILE=ON'\nexport VERBOSE=1\n\npython3 setup.py bdist_wheel --build-type=Debug\n\nSee this issue for more discussion: #424\nSource distributions\nSince OpenCV version 4.3.0, also source distributions are provided in PyPI. This means that if your system is not compatible with any of the wheels in PyPI, pip will attempt to build OpenCV from sources. If you need a OpenCV version which is not available in PyPI as a source distribution, please follow the manual build guidance above instead of this one.\nYou can also force pip to build the wheels from the source distribution. Some examples:\n\npip install --no-binary opencv-python opencv-python\npip install --no-binary :all: opencv-python\n\nIf you need contrib modules or headless version, just change the package name (step 4 in the previous section is not needed). However, any additional CMake flags can be provided via environment variables as described in step 3 of the manual build section. If none are provided, OpenCV's CMake scripts will attempt to find and enable any suitable dependencies. Headless distributions have hard coded CMake flags which disable all possible GUI dependencies.\nOn slow systems such as Raspberry Pi the full build may take several hours. On a 8-core Ryzen 7 3700X the build takes about 6 minutes.\nLicensing\nOpencv-python package (scripts in this repository) is available under MIT license.\nOpenCV itself is available under Apache 2 license.\nThird party package licenses are at LICENSE-3RD-PARTY.txt.\nAll wheels ship with FFmpeg licensed under the LGPLv2.1.\nNon-headless Linux wheels ship with Qt 5 licensed under the LGPLv3.\nThe packages include also other binaries. Full list of licenses can be found from LICENSE-3RD-PARTY.txt.\nVersioning\nfind_version.py script searches for the version information from OpenCV sources and appends also a revision number specific to this repository to the version string. It saves the version information to version.py file under cv2 in addition to some other flags.\nReleases\nA release is made and uploaded to PyPI when a new tag is pushed to master branch. These tags differentiate packages (this repo might have modifications but OpenCV version stays same) and should be incremented sequentially. In practice, release version numbers look like this:\ncv_major.cv_minor.cv_revision.package_revision e.g. 3.1.0.0\nThe master branch follows OpenCV master branch releases. 3.4 branch follows OpenCV 3.4 bugfix releases.\nDevelopment builds\nEvery commit to the master branch of this repo will be built. Possible build artifacts use local version identifiers:\ncv_major.cv_minor.cv_revision+git_hash_of_this_repo e.g. 
3.1.0+14a8d39\nThese artifacts can't be and will not be uploaded to PyPI.\nManylinux wheels\nLinux wheels are built using manylinux2014. These wheels should work out of the box for most distros (which use the GNU C standard library), since they are built against an old version of glibc.\nThe default manylinux2014 images have been extended with some OpenCV dependencies. See the Docker folder for more info.\nSupported Python versions\nPython 3.x compatible pre-built wheels are provided for the officially supported Python versions (not end-of-life):\n\n3.7\n3.8\n3.9\n3.10\n3.11\n\nBackward compatibility\nStarting from the 4.2.0 and 3.4.9 builds, the macOS Travis build environment was updated to XCode 9.4. The change effectively dropped support for macOS versions older than 10.13.\nStarting from the 4.3.0 and 3.4.10 builds, the Linux build environment was updated from manylinux1 to manylinux2014. This dropped support for old Linux distributions.\nStarting from version 4.7.0, the Mac OS GitHub Actions build environment was updated to version 11. Mac OS 10.x support is deprecated. See actions/runner-images#5583\n\n\n", "category": "Image processing"}, {"name": "olefile", "readme": "\n  \n   \nolefile is a Python package to\nparse, read and write Microsoft OLE2\nfiles\n(also called Structured Storage, Compound File Binary Format or Compound\nDocument File Format), such as Microsoft Office 97-2003 documents,\nvbaProject.bin in MS Office 2007+ files, Image Composer and FlashPix\nfiles, Outlook messages, StickyNotes, several Microscopy file formats,\nMcAfee antivirus quarantine files, etc.\nQuick links: Home page -\nDownload/Install\n- Documentation - Report\nIssues/Suggestions/Questions\n- Contact the author -\nRepository - Updates on\nTwitter\n\nNews\nFollow all updates and news on Twitter: https://twitter.com/decalage2\n\n2018-09-09 v0.46: OleFileIO can now be used as a context manager\n(with\u2026as), to close the file automatically (see\ndoc).\nImproved handling of malformed files, fixed several bugs.\n2018-01-24 v0.45: olefile can now overwrite streams of any size,\nimproved handling of malformed files, fixed several\nbugs,\nend of support for Python 2.6 and 3.3.\n2017-01-06 v0.44: several bugfixes, removed support for Python 2.5\n(olefile2), added support for incomplete streams and incorrect\ndirectory entries (to read malformed documents), added getclsid,\nimproved documentation\nwith API reference.\n2017-01-04: moved the documentation to\nReadTheDocs\n2016-05-20: moved olefile repository to\nGitHub\n2016-02-02 v0.43: fixed issues\n#26 and\n#27, better\nhandling of malformed files, use python logging.\nsee\nchangelog\nfor more detailed information and the latest changes.\n\n\n\nDownload/Install\nIf you have pip or setuptools installed (pip is included in Python\n2.7.9+), you may simply run pip install olefile or easy_install\nolefile for the first installation.\nTo update olefile, run pip install -U olefile.\nOtherwise, see http://olefile.readthedocs.io/en/latest/Install.html\n\n\nFeatures\n\nParse, read and write any OLE file such as Microsoft Office 97-2003\nlegacy document formats (Word .doc, Excel .xls, PowerPoint .ppt,\nVisio .vsd, Project .mpp), Image Composer and FlashPix files, Outlook\nmessages, StickyNotes, Zeiss AxioVision ZVI files, Olympus FluoView\nOIB files, etc\nList all the streams and storages contained in an OLE file\nOpen streams as files\nParse and read property streams, containing metadata of the file\nPortable, pure Python module, no dependency\n\nolefile can be used as an 
independent package or with PIL/Pillow.\nolefile is mostly meant for developers. If you are looking for tools to\nanalyze OLE files or to extract data (especially for security purposes\nsuch as malware analysis and forensics), then please also check my\npython-oletools, which\nare built upon olefile and provide a higher-level interface.\n\n\nDocumentation\nPlease see the online\ndocumentation for more\ninformation.\n\n\nReal-life examples\nA real-life example: using OleFileIO_PL for malware analysis and\nforensics.\nSee also this\npaper\nabout python tools for forensics, which features olefile.\n\n\nLicense\nolefile (formerly OleFileIO_PL) is copyright (c) 2005-2018 Philippe\nLagadec (https://www.decalage.info)\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\nRedistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\nRedistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \u201cAS\nIS\u201d AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\nPARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nHOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED\nTO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nolefile is based on source code from the OleFileIO module of the Python\nImaging Library (PIL) published by Fredrik Lundh under the following\nlicense:\nThe Python Imaging Library (PIL) is\n\nCopyright (c) 1997-2009 by Secret Labs AB\nCopyright (c) 1995-2009 by Fredrik Lundh\n\nBy obtaining, using, and/or copying this software and/or its associated\ndocumentation, you agree that you have read, understood, and will comply\nwith the following terms and conditions:\nPermission to use, copy, modify, and distribute this software and its\nassociated documentation for any purpose and without fee is hereby\ngranted, provided that the above copyright notice appears in all copies,\nand that both that copyright notice and this permission notice appear in\nsupporting documentation, and that the name of Secret Labs AB or the\nauthor not be used in advertising or publicity pertaining to\ndistribution of the software without specific, written prior permission.\nSECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO\nTHIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS. 
IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR\nANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER\nRESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF\nCONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN\nCONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\n", "description": "Parse, read and write Microsoft OLE2 files"}, {"name": "odfpy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nODFPY\nINSTALLATION\nRUNNING TESTS\nREDISTRIBUTION LICENSE\n\n\n\n\n\nREADME.md\n\n\n\n\nODFPY\nThis is a collection of utility programs written in Python to manipulate\nOpenDocument 1.2 files.\nHow to proceed: Each application has its own directory. In there, look\nat the manual pages. The Python-based tools need the odf library. Just\nmake a symbolic link like this: ln -s ../odf odf\n... or type: make\nFor your own use of the odf library, see api-for-odfpy.odt\nINSTALLATION\nFirst you get the package.\n$ git clone https://github.com/eea/odfpy.git\n\nThen you can build and install the library for Python2 and Python3:\n$ python setup.py build\n$ python3 setup.py build\n$ su\n# python setup.py install\n# python3 setup.py install\n\nThe library is incompatible with PyXML.\nRUNNING TESTS\nInstall tox via pip when running the tests for the first time:\n$ pip install tox\n\nRun the tests for all supported python versions:\n$ tox\n\nREDISTRIBUTION LICENSE\nThis project, with the exception of the OpenDocument schemas, are\nCopyright (C) 2006-2014, Daniel Carrera, Alex Hudson, S\u00f8ren Roug,\nThomas Zander, Roman Fordinal, Michael Howitz and Georges Khaznadar.\nIt is distributed under both GNU General Public License v.2 or (at\nyour option) any later version or APACHE License v.2.\nSee GPL-LICENSE-2.txt and APACHE-LICENSE-2.0.txt.\nThe OpenDocument RelaxNG Schemas are Copyright \u00a9 OASIS Open 2005. See\nthe schema files for their copyright notice.\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n\n", "description": "Read and write OpenDocument format files"}, {"name": "numpy", "readme": "\n\n\n\n\n\n\n\n\n\nNumPy is the fundamental package for scientific computing with Python.\n\nWebsite: https://www.numpy.org\nDocumentation: https://numpy.org/doc\nMailing list: https://mail.python.org/mailman/listinfo/numpy-discussion\nSource code: https://github.com/numpy/numpy\nContributing: https://www.numpy.org/devdocs/dev/index.html\nBug reports: https://github.com/numpy/numpy/issues\nReport a security vulnerability: https://tidelift.com/docs/security\n\nIt provides:\n\na powerful N-dimensional array object\nsophisticated (broadcasting) functions\ntools for integrating C/C++ and Fortran code\nuseful linear algebra, Fourier transform, and random number capabilities\n\nTesting:\nNumPy requires pytest and hypothesis.  Tests can then be run after installation with:\npython -c \"import numpy, sys; sys.exit(numpy.test() is False)\"\n\nCode of Conduct\nNumPy is a community-driven open source project developed by a diverse group of\ncontributors. The NumPy leadership has made a strong\ncommitment to creating an open, inclusive, and positive community. Please read the\nNumPy Code of Conduct for guidance on how to interact\nwith others in a way that makes our community thrive.\nCall for Contributions\nThe NumPy project welcomes your expertise and enthusiasm!\nSmall improvements or fixes are always appreciated. 
If you are considering larger contributions\nto the source code, please contact us through the mailing\nlist first.\nWriting code isn\u2019t the only way to contribute to NumPy. You can also:\n\nreview pull requests\nhelp us stay on top of new and old issues\ndevelop tutorials, presentations, and other educational materials\nmaintain and improve our website\ndevelop graphic design for our brand assets and promotional materials\ntranslate website content\nhelp with outreach and onboard new contributors\nwrite grant proposals and help with other fundraising efforts\n\nFor more information about the ways you can contribute to NumPy, visit our website.\nIf you\u2019re unsure where to start or how your skills fit in, reach out! You can\nask on the mailing list or here, on GitHub, by opening a new issue or leaving a\ncomment on a relevant issue that is already open.\nOur preferred channels of communication are all public, but if you\u2019d like to\nspeak to us in private first, contact our community coordinators at\nnumpy-team@googlegroups.com or on Slack (write numpy-team@googlegroups.com for\nan invitation).\nWe also have a biweekly community call, details of which are announced on the\nmailing list. You are very welcome to join.\nIf you are new to contributing to open source, this\nguide helps explain why, what,\nand how to successfully get involved.\n", "description": "Fundamental package for array computing in Python.", "category": "Data analysis/science"}, {"name": "numpy-financial", "readme": "\nNumPy Financial\nThe numpy-financial package contains a collection of elementary financial\nfunctions.\nThe financial functions in NumPy\nare deprecated and eventually will be removed from NumPy; see\nNEP-32\nfor more information.  This package is the replacement for the original\nNumPy financial functions.\nThe source code for this package is available at https://github.com/numpy/numpy-financial.\nThe importable name of the package is numpy_financial.  The recommended\nalias is npf.  For example,\n>>> import numpy_financial as npf\n>>> npf.irr([-250000, 100000, 150000, 200000, 250000, 300000])\n0.5672303344358536\n\n"}, {"name": "numexpr", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNumExpr: Fast numerical expression evaluator for NumPy\nWhat is NumExpr?\nHow NumExpr achieves high performance\nInstallation\nFrom wheels\nFrom Source\nEnable Intel\u00ae MKL support\nUsage\nDocumentation\nAuthors\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nNumExpr: Fast numerical expression evaluator for NumPy\n\n\nAuthor:\nDavid M. Cooke, Francesc Alted, and others.\nMaintainer:Robert A. McLeod\n\nContact:\nrobbmcleod@gmail.com\nURL:https://github.com/pydata/numexpr\n\nDocumentation:http://numexpr.readthedocs.io/en/latest/\n\nTravis CI:\n\nGitHub Actions:\n\n\nPyPi:\n\nDOI:\n\n\nreadthedocs:\n\n\n\n\nWhat is NumExpr?\nNumExpr is a fast numerical expression evaluator for NumPy.  
With it,\nexpressions that operate on arrays (like '3*a+4*b') are accelerated\nand use less memory than doing the same calculation in Python.\nIn addition, its multi-threaded capabilities can make use of all your\ncores -- which generally results in substantial performance scaling compared\nto NumPy.\nLast but not least, numexpr can make use of Intel's VML (Vector Math\nLibrary, normally integrated in its Math Kernel Library, or MKL).\nThis allows further acceleration of transcendental expressions.\n\nHow NumExpr achieves high performance\nThe main reason why NumExpr achieves better performance than NumPy is\nthat it avoids allocating memory for intermediate results. This\nresults in better cache utilization and reduces memory access in\ngeneral. Due to this, NumExpr works best with large arrays.\nNumExpr parses expressions into its own op-codes that are then used by\nan integrated computing virtual machine. The array operands are split\ninto small chunks that easily fit in the cache of the CPU and passed\nto the virtual machine. The virtual machine then applies the\noperations on each chunk. It's worth noting that all temporaries and\nconstants in the expression are also chunked. Chunks are distributed among\nthe available cores of the CPU, resulting in highly parallelized code\nexecution.\nThe result is that NumExpr can get the most out of your machine's computing\ncapabilities for array-wise computations. Common speed-ups with regard\nto NumPy are usually between 0.95x (for very simple expressions like\n'a + 1') and 4x (for relatively complex ones like 'a*b-4.1*a > 2.5*b'),\nalthough much higher speed-ups can be achieved for some functions and complex\nmath operations (up to 15x in some cases).\nNumExpr performs best on matrices that are too large to fit in the L1 CPU cache.\nIn order to get a better idea of the different speed-ups that can be achieved\non your platform, run the provided benchmarks.\n\nInstallation\n\nFrom wheels\nNumExpr is available for install via pip for a wide range of platforms and\nPython versions (which may be browsed at: https://pypi.org/project/numexpr/#files).\nInstallation can be performed as:\npip install numexpr\n\nIf you are using the Anaconda or Miniconda distribution of Python you may prefer\nto use the conda package manager in this case:\nconda install numexpr\n\n\nFrom Source\nOn most *nix systems your compilers will already be present. However, if you\nare using a virtual environment with a substantially newer version of Python than\nyour system Python, you may be prompted to install a new version of gcc or clang.\nFor Windows, you will need to install the Microsoft Visual C++ Build Tools\n(which are free) first. The version depends on which version of Python you have\ninstalled:\nhttps://wiki.python.org/moin/WindowsCompilers\nFor Python 3.6+ simply installing the latest version of MSVC build tools should\nbe sufficient. Note that wheels found via pip do not include MKL support. Wheels\navailable via conda will have MKL, if the MKL backend is used for NumPy.\nSee requirements.txt for the required version of NumPy.\nNumExpr is built in the standard Python way:\npython setup.py build install\n\nYou can test numexpr with:\npython -c \"import numexpr; numexpr.test()\"\n\nDo not test NumExpr in the source directory or you will generate import errors.\n\nEnable Intel\u00ae MKL support\nNumExpr includes support for Intel's MKL library. 
This may provide better\nperformance on Intel architectures, mainly when evaluating transcendental\nfunctions (trigonometrical, exponential, ...).\nIf you have Intel's MKL, copy the site.cfg.example that comes with the\ndistribution to site.cfg and edit the latter file to provide correct paths to\nthe MKL libraries in your system.  After doing this, you can proceed with the\nusual building instructions listed above.\nPay attention to the messages during the building process in order to know\nwhether MKL has been detected or not.  Finally, you can check the speed-ups on\nyour machine by running the bench/vml_timing.py script (you can play with\ndifferent parameters to the set_vml_accuracy_mode() and set_vml_num_threads()\nfunctions in the script so as to see how it would affect performance).\n\nUsage\n>>> import numpy as np\n>>> import numexpr as ne\n\n>>> a = np.arange(1e6)   # Choose large arrays for better speedups\n>>> b = np.arange(1e6)\n\n>>> ne.evaluate(\"a + 1\")   # a simple expression\narray([  1.00000000e+00,   2.00000000e+00,   3.00000000e+00, ...,\n         9.99998000e+05,   9.99999000e+05,   1.00000000e+06])\n\n>>> ne.evaluate('a*b-4.1*a > 2.5*b')   # a more complex one\narray([False, False, False, ...,  True,  True,  True], dtype=bool)\n\n>>> ne.evaluate(\"sin(a) + arcsinh(a/b)\")   # you can also use functions\narray([        NaN,  1.72284457,  1.79067101, ...,  1.09567006,\n        0.17523598, -0.09597844])\n\n>>> s = np.array([b'abba', b'abbb', b'abbcdef'])\n>>> ne.evaluate(\"b'abba' == s\")   # string arrays are supported too\narray([ True, False, False], dtype=bool)\n\n\nDocumentation\nPlease see the official documentation at numexpr.readthedocs.io.\nIncluded is a user guide, benchmark results, and the reference API.\n\nAuthors\nPlease see AUTHORS.txt.\n\nLicense\nNumExpr is distributed under the MIT license.\n\n\n", "description": "Fast numerical expression evaluator for NumPy."}, {"name": "numba", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "JIT compiler for Python array expressions and functions using LLVM."}, {"name": "notebook", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJupyter Notebook\nMaintained versions\nNotebook v7\nClassic Notebook v6\nJupyter notebook, the language-agnostic evolution of IPython notebook\nInstallation\nUsage - Running Jupyter notebook\nRunning in a local installation\nRunning in a remote installation\nDevelopment Installation\nContributing\nCommunity Guidelines and Code of Conduct\nResources\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyter Notebook\n\n\n\n\nThe Jupyter notebook is a web-based notebook environment for interactive\ncomputing.\n\nMaintained versions\nWe maintain the two most recently released major versions of Jupyter Notebook, Notebook v5 and Classic Notebook v6. After Notebook v7.0 is released, we will no longer maintain Notebook v5. All Notebook v5 users are strongly advised to upgrade to Classic Notebook v6 as soon as possible.\nThe Jupyter Notebook project is currently undertaking a transition to a more modern code base built from the ground-up using JupyterLab components and extensions.\nThere is new stream of work which was submitted and then accepted as a Jupyter Enhancement Proposal (JEP) as part of the next version (v7): https://jupyter.org/enhancement-proposals/79-notebook-v7/notebook-v7.html\nThere is also a plan to continue maintaining Notebook v6 with bug and security fixes only, to ease the transition to Notebook v7: jupyter/notebook-team-compass#5 (comment)\nNotebook v7\nThe newest major version of Notebook is based on:\n\nJupyterLab components for the frontend\nJupyter Server for the Python server\n\nThis represents a significant change to the jupyter/notebook code base.\nTo learn more about Notebook v7: https://jupyter.org/enhancement-proposals/79-notebook-v7/notebook-v7.html\nClassic Notebook v6\nMaintainance and security-related issues are now being addressed in the 6.4.x branch.\nA 6.5.x branch will be soon created and will depend on nbclassic for the HTML/JavaScript/CSS assets.\nNew features and continuous improvement is now focused on Notebook v7 (see section above).\nIf you have an open pull request with a new feature or if you were planning to open one, we encourage switching over to the Jupyter Server and JupyterLab architecture, and distribute it as a server extension and / or JupyterLab prebuilt extension. That way your new feature will also be compatible with the new Notebook v7.\nJupyter notebook, the language-agnostic evolution of IPython notebook\nJupyter notebook is a language-agnostic HTML notebook application for\nProject Jupyter. In 2015, Jupyter notebook was released as a part of\nThe Big Split\u2122 of the IPython codebase. IPython 3 was the last major monolithic\nrelease containing both language-agnostic code, such as the IPython notebook,\nand language specific code, such as the IPython kernel for Python. 
As\ncomputing spans across many languages, Project Jupyter will continue to develop the\nlanguage-agnostic Jupyter notebook in this repo and with the help of the\ncommunity develop language specific kernels which are found in their own\ndiscrete repos.\n\nThe Big Split\u2122 announcement\nJupyter Ascending blog post\n\nInstallation\nYou can find the installation documentation for the\nJupyter platform, on ReadTheDocs.\nThe documentation for advanced usage of Jupyter notebook can be found\nhere.\nFor a local installation, make sure you have\npip installed and run:\npip install notebook\nUsage - Running Jupyter notebook\nRunning in a local installation\nLaunch with:\njupyter notebook\nRunning in a remote installation\nYou need some configuration before starting Jupyter notebook remotely. See Running a notebook server.\nDevelopment Installation\nSee CONTRIBUTING.md for how to set up a local development installation.\nContributing\nIf you are interested in contributing to the project, see CONTRIBUTING.md.\nCommunity Guidelines and Code of Conduct\nThis repository is a Jupyter project and follows the Jupyter\nCommunity Guides and Code of Conduct.\nResources\n\nProject Jupyter website\nOnline Demo at jupyter.org/try\nDocumentation for Jupyter notebook\nKorean Version of Installation\nDocumentation for Project Jupyter [PDF]\nIssues\nTechnical support - Jupyter Google Group\n\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team. If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n", "description": "shim - Jupyter Server extension providing v6 API for nbclassic"}, {"name": "notebook-shim", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJupyter Server\nInstallation and Basic usage\nVersioning and Branches\nUsage - Running Jupyter Server\nRunning in a local installation\nTesting\nContributing\nTeam Meetings and Roadmap\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyter Server\n\n\nThe Jupyter Server provides the backend (i.e. 
the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.\nFor more information, read our documentation here.\nInstallation and Basic usage\nTo install the latest release locally, make sure you have\npip installed and run:\npip install jupyter_server\n\nJupyter Server currently supports Python>=3.6 on Linux, OSX and Windows.\nVersioning and Branches\nIf Jupyter Server is a dependency of your project/application, it is important that you pin it to a version that works for your application. Currently, Jupyter Server only has minor and patch versions. Different minor versions likely include API-changes while patch versions do not change API.\nWhen a new minor version is released on PyPI, a branch for that version will be created in this repository, and the version of the main branch will be bumped to the next minor version number. That way, the main branch always reflects the latest un-released version.\nTo see the changes between releases, checkout the CHANGELOG.\nUsage - Running Jupyter Server\nRunning in a local installation\nLaunch with:\njupyter server\n\nTesting\nSee CONTRIBUTING.\nContributing\nIf you are interested in contributing to the project, see CONTRIBUTING.rst.\nTeam Meetings and Roadmap\n\nWhen: Thursdays 8:00am, Pacific time\nWhere: Jovyan Zoom\nWhat: Meeting notes\n\nSee our tentative roadmap here.\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team. If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n"}, {"name": "nltk", "readme": "\nThe Natural Language Toolkit (NLTK) is a Python package for\nnatural language processing.  NLTK requires Python 3.7, 3.8, 3.9, 3.10 or 3.11.\n", "description": "Natural language toolkit for Python.", "category": "Natural language processing"}, {"name": "networkx", "readme": "\nNetworkX Survey 2023!! 
\ud83c\udf89 Fill out the survey to tell us about your ideas, complaints, praises of NetworkX!\n\n\n\nNetworkX is a Python package for the creation, manipulation,\nand study of the structure, dynamics, and functions\nof complex networks.\n\nWebsite (including documentation): https://networkx.org\nMailing list: https://groups.google.com/forum/#!forum/networkx-discuss\nSource: https://github.com/networkx/networkx\nBug reports: https://github.com/networkx/networkx/issues\nReport a security vulnerability: https://tidelift.com/security\nTutorial: https://networkx.org/documentation/latest/tutorial.html\nGitHub Discussions: https://github.com/networkx/networkx/discussions\n\n\nSimple example\nFind the shortest path between two nodes in an undirected graph:\n>>> import networkx as nx\n>>> G = nx.Graph()\n>>> G.add_edge(\"A\", \"B\", weight=4)\n>>> G.add_edge(\"B\", \"D\", weight=2)\n>>> G.add_edge(\"A\", \"C\", weight=3)\n>>> G.add_edge(\"C\", \"D\", weight=4)\n>>> nx.shortest_path(G, \"A\", \"D\", weight=\"weight\")\n['A', 'B', 'D']\n\n\nInstall\nInstall the latest version of NetworkX:\n$ pip install networkx\nInstall with all optional dependencies:\n$ pip install networkx[all]\nFor additional details, please see INSTALL.rst.\n\n\nBugs\nPlease report any bugs that you find here.\nOr, even better, fork the repository on GitHub\nand create a pull request (PR). We welcome all changes, big or small, and we\nwill help you make the PR if you are new to git (just ask on the issue and/or\nsee CONTRIBUTING.rst).\n\n\nLicense\nReleased under the 3-Clause BSD license (see LICENSE.txt):\nCopyright (C) 2004-2023 NetworkX Developers\nAric Hagberg <hagberg@lanl.gov>\nDan Schult <dschult@colgate.edu>\nPieter Swart <swart@lanl.gov>\n\n", "description": "Library for graph and network analysis."}, {"name": "nest-asyncio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nInstallation\nUsage\n\n\n\n\n\nREADME.rst\n\n\n\n\n \n\n \n\n\nIntroduction\nBy design asyncio does not allow\nits event loop to be nested. This presents a practical problem:\nWhen in an environment where the event loop is\nalready running it's impossible to run tasks and wait\nfor the result. Trying to do so will give the error\n\"RuntimeError: This event loop is already running\".\nThe issue pops up in various environments, such as web servers,\nGUI applications and in Jupyter notebooks.\nThis module patches asyncio to allow nested use of asyncio.run and\nloop.run_until_complete.\n\nInstallation\npip3 install nest_asyncio\n\nPython 3.5 or higher is required.\n\nUsage\nimport nest_asyncio\nnest_asyncio.apply()\nOptionally the specific loop that needs patching can be given\nas argument to apply, otherwise the current event loop is used.\nAn event loop can be patched whether it is already running\nor not. Only event loops from asyncio can be patched;\nLoops from other projects, such as uvloop or quamash,\ngenerally can't be patched.\n\n\n"}, {"name": "nbformat", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. 
See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Jupyter notebook format reader/writer"}, {"name": "nbconvert", "readme": "\nnbconvert\nJupyter Notebook Conversion\n\n\nThe nbconvert tool, jupyter nbconvert, converts notebooks to various other\nformats via Jinja templates. The nbconvert tool allows you to convert an\n.ipynb notebook file into various static formats including:\n\nHTML\nLaTeX\nPDF\nReveal JS\nMarkdown (md)\nReStructured Text (rst)\nexecutable script\n\nUsage\nFrom the command line, use nbconvert to convert a Jupyter notebook (input) to a\na different format (output). The basic command structure is:\n$ jupyter nbconvert --to <output format> <input notebook>\n\nwhere <output format> is the desired output format and <input notebook> is the\nfilename of the Jupyter notebook.\nExample: Convert a notebook to HTML\nConvert Jupyter notebook file, mynotebook.ipynb, to HTML using:\n$ jupyter nbconvert --to html mynotebook.ipynb\n\nThis command creates an HTML output file named mynotebook.html.\nDev Install\nCheck if pandoc is installed (pandoc --version); if needed, install:\nsudo apt-get install pandoc\n\nOr\nbrew install pandoc\n\nInstall nbconvert for development using:\ngit clone https://github.com/jupyter/nbconvert.git\ncd nbconvert\npip install -e .\n\nRunning the tests after a dev install above:\npip install nbconvert[test]\npy.test --pyargs nbconvert\n\nDocumentation\n\nDocumentation for Jupyter nbconvert\n[PDF]\nnbconvert examples on GitHub\nDocumentation for Project Jupyter\n[PDF]\n\nTechnical Support\n\nIssues and Bug Reports: A place to report\nbugs or regressions found for nbconvert\nCommunity Technical Support and Discussion - Discourse: A place for\ninstallation, configuration, and troubleshooting assistannce by the Jupyter community.\nAs a non-profit project and maintainers who are primarily volunteers, we encourage you\nto ask questions and share your knowledge on Discourse.\n\nJupyter Resources\n\nJupyter mailing list\nProject Jupyter website\n\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. 
Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution.  Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team.  If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n", "description": "Converter for Jupyter notebooks"}, {"name": "nbclient", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nnbclient\nInteractive Demo\nInstallation\nDocumentation\nPython Version Support\nOrigins\nRelationship to JupyterClient\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\n\n\n\nnbclient\nNBClient lets you execute notebooks.\nA client library for programmatic notebook execution, NBClient is a tool for running Jupyter Notebooks in\ndifferent execution contexts, including the command line.\nInteractive Demo\nTo demo NBClient interactively, click this Binder badge to start the demo:\n\nInstallation\nIn a terminal, run:\npython3 -m pip install nbclient\n\nDocumentation\nSee ReadTheDocs for more in-depth details about the project and the\nAPI Reference.\nPython Version Support\nThis library currently supports Python 3.6+ versions. As minor Python\nversions are officially sunset by the Python org, nbclient will similarly\ndrop support in the future.\nOrigins\nThis library used to be part of the nbconvert project. NBClient extracted nbconvert's ExecutePreprocessorinto its own library for easier updating and importing by downstream libraries and applications.\nRelationship to JupyterClient\nNBClient and JupyterClient are distinct projects.\njupyter_client is a client library for the jupyter protocol. Specifically, jupyter_client provides the Python API\nfor starting, managing and communicating with Jupyter kernels.\nWhile, nbclient allows notebooks to be run in different execution contexts.\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution.  Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team.  
If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n", "description": "Executes Jupyter notebooks programmatically"}, {"name": "nbclassic", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nThe Classic Jupyter Notebook as a Jupyter Server Extension\nHow does it work?\nBasic Usage\n\n\n\n\n\nREADME.md\n\n\n\n\nThe Classic Jupyter Notebook as a Jupyter Server Extension\n\n\nRead the full NbClassic User Manual here!\nThe Jupyter Notebook is evolving to bring you big new features, but it\nwill also break backwards compatibility with many classic Jupyter Notebook\nextensions and customizations.\nNbClassic provides a backwards compatible Jupyter Notebook interface that\nyou can install side-by-side with the latest versions: That way, you can\nfearlessly upgrade without worrying about your classic extensions and\ncustomizations breaking.\nHow does it work?\nBecause NbClassic provides the classic interface on top of the new Jupyter\nServer backend, it can coexist with other frontends like JupyterLab and\nNotebook 7 in the same installation. NbClassic preserves the custom classic\nnotebook experience under a new set of URL endpoints, under the namespace\n/nbclassic/.\nBasic Usage\nInstall from PyPI:\n> pip install nbclassic\n\nThis will automatically enable the NbClassic Jupyter Server extension in Jupyter Server.\nLaunch directly:\n> jupyter nbclassic\n\nAlternatively, you can run Jupyter Server:\n> jupyter server\n\n\n\n", "description": "Jupyter classic notebook as Jupyter Server extension"}, {"name": "nashpy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nNashpy: a python library for 2 player games.\nDocumentation\nInstallation\nUsage\nOther game theoretic software\nDevelopment\nCode of conduct\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\nNashpy: a python library for 2 player games.\nNashpy is:\n\nAn excellently documented library:\n\nThe Nashpy game theory text book aims\nto be a course text on the background theory.\nThe contributor\ndocumentation\naims to be a text on research software development and help first time open\nsource software contributions.\n\n\nA state of the art developed code\nbase which\naims to use the best of available tools to ensure the code is correct,\nreadable and robust.\nFeature rich, the following are implemented:\n\nSupport enumeration How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nVertex enumeration How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nLemke-Howson algorithm How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nFictitious play How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nStochastic fictitious play How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nReplicator dynamics How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nReplicator-mutation dynamics How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nAsymmetric replicator dynamics How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nMoran processes How to docs \ud83d\udc0d\nGenerate games from repeated games How to docs \ud83d\udc0d - Theory docs \ud83d\udcd8\nMoran processes on interaction graphs How to docs \ud83d\udc0d\nMoran processes on replication graphs How to docs 
\ud83d\udc0d\n\n\n\nDocumentation\nFull documentation is available here: http://nashpy.readthedocs.io/\nInstallation\n$ python -m pip install nashpy\nTo install Nashpy on Fedora, use:\n$ dnf install python3-nashpy\nUsage\nCreate bi matrix games by passing two 2 dimensional arrays/lists:\n>>> import nashpy as nash\n>>> A = [[1, 2], [3, 0]]\n>>> B = [[0, 2], [3, 1]]\n>>> game = nash.Game(A, B)\n>>> for eq in game.support_enumeration():\n...     print(eq)\n(array([1., 0.]), array([0., 1.]))\n(array([0., 1.]), array([1., 0.]))\n(array([0.5, 0.5]), array([0.5, 0.5]))\n>>> game[[0, 1], [1, 0]]\narray([3, 3])\nOther game theoretic software\n\nGambit is a library with a python api and\nsupport for more algorithms and more than 2 player games.\nGame theory explorer a web interface to\ngambit useful for teaching.\nAxelrod a research library aimed\nat the study of the Iterated Prisoners dilemma\n\nDevelopment\nClone the repository and create a virtual environment:\n$ git clone https://github.com/drvinceknight/nashpy.git\n$ cd nashpy\n$ python -m venv env\n\nActivate the virtual environment and install tox:\n$ source env/bin/activate\n$ python -m pip install tox\n\nMake modifications.\nTo run the tests:\n$ python -m tox\n\nTo build the documentation. First install the software which also installs the\ndocumentation build requirements.\n$ python -m pip install flit\n$ python -m flit install --symlink\nThen:\n$ cd docs\n$ make html\nFull contribution documentation is available at\nhttps://nashpy.readthedocs.io/en/latest/contributing/index.html\nPull requests are welcome.\nCode of conduct\nIn the interest of fostering an open and welcoming environment, all\ncontributors, maintainers and users are expected to abide by the Python code of\nconduct: https://www.python.org/psf/codeofconduct/\n\n\n", "description": "Game theory algorithms for 2 player games", "category": "Game theory"}, {"name": "mutagen", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\nMutagen is a Python module to handle audio metadata. It supports ASF, FLAC,\nMP4, Monkey's Audio, MP3, Musepack, Ogg Opus, Ogg FLAC, Ogg Speex, Ogg Theora,\nOgg Vorbis, True Audio, WavPack, OptimFROG, and AIFF audio files. All\nversions of ID3v2 are supported, and all standard ID3v2.4 frames are parsed.\nIt can read Xing headers to accurately calculate the bitrate and length of\nMP3s. ID3 and APEv2 tags can be edited regardless of audio format. It can also\nmanipulate Ogg streams on an individual packet/page level.\nMutagen works with Python 3.8+ (CPython and PyPy) on Linux, Windows and macOS,\nand has no dependencies outside the Python standard library. Mutagen is licensed\nunder the GPL version 2 or\nlater.\nFor more information visit https://mutagen.readthedocs.org\n\n\n\n\n", "description": "Read and write audio file metadata"}, {"name": "murmurhash", "readme": "\n\n\n\nREADME.md\n\n\n\n\n\nCython bindings for MurmurHash2\n\n\n\n\n\n\n", "description": "Bindings for MurmurHash2 hash algorithm."}, {"name": "munch", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nmunch\nInstallation\nUsage\nDictionary Methods\nSerialization\nDefault Values\nMiscellaneous\nFeedback\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\nmunch\nInstallation\npip install munch\n\nUsage\nmunch is a fork of David Schoonover's Bunch package, providing similar functionality. 
99% of the work was done by him, and the fork was made mainly for lack of responsiveness for fixes and maintenance on the original code.\nMunch is a dictionary that supports attribute-style access, a la JavaScript:\n>>> from munch import Munch\n>>> b = Munch()\n>>> b.hello = 'world'\n>>> b.hello\n'world'\n>>> b['hello'] += \"!\"\n>>> b.hello\n'world!'\n>>> b.foo = Munch(lol=True)\n>>> b.foo.lol\nTrue\n>>> b.foo is b['foo']\nTrue\nDictionary Methods\nA Munch is a subclass of dict; it supports all the methods a dict does:\n>>> list(b.keys())\n['hello', 'foo']\nIncluding update():\n>>> b.update({ 'ponies': 'are pretty!' }, hello=42)\n>>> print(repr(b))\nMunch({'hello': 42, 'foo': Munch({'lol': True}), 'ponies': 'are pretty!'})\nAs well as iteration:\n>>> [ (k,b[k]) for k in b ]\n[('hello', 42), ('foo', Munch({'lol': True})), ('ponies', 'are pretty!')]\nAnd \"splats\":\n>>> \"The {knights} who say {ni}!\".format(**Munch(knights='lolcats', ni='can haz'))\n'The lolcats who say can haz!'\nSerialization\nMunches happily and transparently serialize to JSON and YAML.\n>>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')\n>>> import json\n>>> json.dumps(b)\n'{\"foo\": {\"lol\": true}, \"hello\": 42, \"ponies\": \"are pretty!\"}'\nIf JSON support is present (json or simplejson), Munch will have a toJSON() method which returns the object as a JSON string.\nIf you have PyYAML installed, Munch attempts to register itself with the various YAML Representers so that Munches can be transparently dumped and loaded.\n>>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')\n>>> import yaml\n>>> yaml.dump(b)\n'!munch.Munch\\nfoo: !munch.Munch\\n  lol: true\\nhello: 42\\nponies: are pretty!\\n'\n>>> yaml.safe_dump(b)\n'foo:\\n  lol: true\\nhello: 42\\nponies: are pretty!\\n'\nIn addition, Munch instances will have a toYAML() method that returns the YAML string using yaml.safe_dump(). This method also replaces __str__ if present, as I find it far more readable. You can revert back to Python's default use of __repr__ with a simple assignment: Munch.__str__ = Munch.__repr__. The Munch class will also have a static method Munch.fromYAML(), which loads a Munch out of a YAML string.\nFinally, Munch converts easily and recursively to (unmunchify(), Munch.toDict()) and from (munchify(), Munch.fromDict()) a normal dict, making it easy to cleanly serialize them in other formats.\nDefault Values\nDefaultMunch instances return a specific default value when an attribute is missing from the collection. Like collections.defaultdict, the first argument is the value to use for missing keys:\n>>> from munch import DefaultMunch\n>>> undefined = object()\n>>> b = DefaultMunch(undefined, {'hello': 'world!'})\n>>> b.hello\n'world!'\n>>> b.foo is undefined\nTrue\nDefaultMunch.fromDict() also takes the default argument:\n>>> undefined = object()\n>>> b = DefaultMunch.fromDict({'recursively': {'nested': 'value'}}, undefined)\n>>> b.recursively.nested == 'value'\nTrue\n>>> b.recursively.foo is undefined\nTrue\nOr you can use DefaultFactoryMunch to specify a factory for generating missing attributes. The first argument is the factory:\n>>> from munch import DefaultFactoryMunch\n>>> b = DefaultFactoryMunch(list, {'hello': 'world!'})\n>>> b.hello\n'world!'\n>>> b.foo\n[]\n>>> b.bar.append('hello')\n>>> b.bar\n['hello']\nMiscellaneous\n\nIt is safe to import * from this module. You'll get: Munch, DefaultMunch, DefaultFactoryMunch, munchify and unmunchify.\nAmple Tests. 
Just run pip install tox && tox from the project root.\n\nFeedback\nOpen a ticket / fork the project on GitHub.\n\n\n", "description": "Dictionary that provides attribute style access."}, {"name": "multidict", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nmultidict\nIntroduction\nLicense\nLibrary Installation\nChangelog\n\n\n\n\n\nREADME.rst\n\n\n\n\nmultidict\n\n\n\n\n\n\n\n\n\n\n\nMultidict is dict-like collection of key-value pairs where key\nmight occur more than once in the container.\n\nIntroduction\nHTTP Headers and URL query string require specific data structure:\nmultidict. It behaves mostly like a regular dict but it may have\nseveral values for the same key and preserves insertion ordering.\nThe key is str (or istr for case-insensitive dictionaries).\nmultidict has four multidict classes:\nMultiDict, MultiDictProxy, CIMultiDict\nand CIMultiDictProxy.\nImmutable proxies (MultiDictProxy and\nCIMultiDictProxy) provide a dynamic view for the\nproxied multidict, the view reflects underlying collection changes. They\nimplement the collections.abc.Mapping interface.\nRegular mutable (MultiDict and CIMultiDict) classes\nimplement collections.abc.MutableMapping and allows them to change\ntheir own content.\nCase insensitive (CIMultiDict and\nCIMultiDictProxy) assume the keys are case\ninsensitive, e.g.:\n>>> dct = CIMultiDict(key='val')\n>>> 'Key' in dct\nTrue\n>>> dct['Key']\n'val'\n\nKeys should be str or istr instances.\nThe library has optional C Extensions for speed.\n\nLicense\nApache 2\n\nLibrary Installation\n$ pip install multidict\nThe library is Python 3 only!\nPyPI contains binary wheels for Linux, Windows and MacOS.  If you want to install\nmultidict on another operating system (or Alpine Linux inside a Docker) the\ntarball will be used to compile the library from source.  It requires a C compiler and\nPython headers to be installed.\nTo skip the compilation, please use the MULTIDICT_NO_EXTENSIONS environment variable,\ne.g.:\n$ MULTIDICT_NO_EXTENSIONS=1 pip install multidict\nPlease note, the pure Python (uncompiled) version is about 20-50 times slower depending on\nthe usage scenario!!!\n\nChangelog\nSee RTD page.\n\n\n", "description": "Dictionary with multiple values per key and ordered keys."}, {"name": "mtcnn", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nMTCNN\nINSTALLATION\nUSAGE\nBENCHMARK\nMODEL\nLICENSE\nREFERENCE\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nMTCNN\n\n\n\nImplementation of the MTCNN face detector for Keras in Python3.4+. It is written from scratch, using as a reference the implementation of\nMTCNN from David Sandberg (FaceNet's MTCNN) in Facenet. It is based on the paper Zhang, K et al. (2016) [ZHANG2016].\n\n\nINSTALLATION\nCurrently it is only supported Python3.4 onwards. 
It can be installed through pip:\n$ pip install mtcnn\nThis implementation requires OpenCV>=4.1 and Keras>=2.0.0 (any Tensorflow supported by Keras will be supported by this MTCNN package).\nIf this is the first time you use tensorflow, you will probably need to install it on your system:\n$ pip install tensorflow\nor with conda\n$ conda install tensorflow\nNote that the tensorflow-gpu version can be used instead if a GPU device is available on the system, which will speed up the results.\n\nUSAGE\nThe following example illustrates the ease of use of this package:\n>>> from mtcnn import MTCNN\n>>> import cv2\n>>>\n>>> img = cv2.cvtColor(cv2.imread(\"ivan.jpg\"), cv2.COLOR_BGR2RGB)\n>>> detector = MTCNN()\n>>> detector.detect_faces(img)\n[\n    {\n        'box': [277, 90, 48, 63],\n        'keypoints':\n        {\n            'nose': (303, 131),\n            'mouth_right': (313, 141),\n            'right_eye': (314, 114),\n            'left_eye': (291, 117),\n            'mouth_left': (296, 143)\n        },\n        'confidence': 0.99851983785629272\n    }\n]\nThe detector returns a list of JSON objects. Each JSON object contains three main keys: 'box', 'confidence' and 'keypoints':\n\nThe bounding box is formatted as [x, y, width, height] under the key 'box'.\nThe confidence is the probability that a bounding box matches a face.\nThe keypoints are formatted into a JSON object with the keys 'left_eye', 'right_eye', 'nose', 'mouth_left', 'mouth_right'. Each keypoint is identified by a pixel position (x, y).\n\nAnother good example of usage can be found in the file \"example.py\" located in the root of this repository. Also, you can run the Jupyter Notebook \"example.ipynb\" for another example of usage.\n\nBENCHMARK\nThe following tables show the benchmark of this mtcnn implementation running on an Intel i7-3612QM CPU @ 2.10GHz, with a CPU-based Tensorflow 1.4.1.\n\n\nPictures containing a single frontal face:\n\n\n\n\nImage size\nTotal pixels\nProcess time\nFPS\n\n\n\n460x259\n119,140\n0.118 seconds\n8.5\n\n561x561\n314,721\n0.227 seconds\n4.5\n\n667x1000\n667,000\n0.456 seconds\n2.2\n\n1920x1200\n2,304,000\n1.093 seconds\n0.9\n\n4799x3599\n17,271,601\n8.798 seconds\n0.1\n\n\n\n\n\nPictures containing 10 frontal faces:\n\n\n\n\nImage size\nTotal pixels\nProcess time\nFPS\n\n\n\n474x224\n106,176\n0.185 seconds\n5.4\n\n736x348\n256,128\n0.290 seconds\n3.4\n\n2100x994\n2,087,400\n1.286 seconds\n0.7\n\n\n\n\nMODEL\nBy default, MTCNN bundles a face detection weights model.\nThe model is adapted from Facenet's MTCNN implementation, merged in a single file located inside the folder 'data' relative\nto the module's path. It can be overridden by injecting it into the MTCNN() constructor during instantiation.\nThe model must be NumPy-based, containing the 3 main keys \"pnet\", \"rnet\" and \"onet\", each of them holding the weights of the layers of the network.\nFor more reference about the network definition, take a close look at the paper from Zhang et al. (2016) [ZHANG2016].\n\nLICENSE\nMIT License.\n\nREFERENCE\n\n\n[ZHANG2016](1, 2) Zhang, K., Zhang, Z., Li, Z., and Qiao, Y. (2016). Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499\u20131503.\n\n\n\n\n", "description": "Face detection using multi-task cascaded convolutional neural networks"}, {"name": "mpmath", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nmpmath\n0. History and credits\n1. Download & installation\n2. Documentation\n3. Running tests\n4. 
Known problems\n5. Help and bug reports\n\n\n\n\n\nREADME.rst\n\n\n\n\nmpmath\n\n \n \n \n\nA Python library for arbitrary-precision floating-point arithmetic.\nWebsite: https://mpmath.org/\nMain author: Fredrik Johansson <fredrik.johansson@gmail.com>\nMpmath is free software released under the New BSD License (see the\nLICENSE file for details).\n\n0. History and credits\nThe following people (among others) have contributed major patches\nor new features to mpmath:\n\nPearu Peterson <pearu.peterson@gmail.com>\nMario Pernici <mario.pernici@mi.infn.it>\nOndrej Certik <ondrej@certik.cz>\nVinzent Steinberg <vinzent.steinberg@gmail.cm>\nNimish Telang <ntelang@gmail.com>\nMike Taschuk <mtaschuk@ece.ualberta.ca>\nCase Van Horsen <casevh@gmail.com>\nJorn Baayen <jorn.baayen@gmail.com>\nChris Smith <smichr@gmail.com>\nJuan Arias de Reyna <arias@us.es>\nIoannis Tziakos <itziakos@gmail.com>\nAaron Meurer <asmeurer@gmail.com>\nStefan Krastanov <krastanov.stefan@gmail.com>\nKen Allen <ken.allen@sbcglobal.net>\nTimo Hartmann <thartmann15@gmail.com>\nSergey B Kirpichev <skirpichev@gmail.com>\nKris Kuhlman <kristopher.kuhlman@gmail.com>\nPaul Masson <paulmasson@analyticphysics.com>\nMichael Kagalenko <michael.kagalenko@gmail.com>\nJonathan Warner <warnerjon12@gmail.com>\nMax Gaukler <max.gaukler@fau.de>\nGuillermo Navas-Palencia <g.navas.palencia@gmail.com>\nNike Dattani <nike@hpqc.org>\n\nNumerous other people have contributed by reporting bugs,\nrequesting new features, or suggesting improvements to the\ndocumentation.\nFor a detailed changelog, including individual contributions,\nsee the CHANGES file.\nFredrik's work on mpmath during summer 2008 was sponsored by Google\nas part of the Google Summer of Code program.\nFredrik's work on mpmath during summer 2009 was sponsored by the\nAmerican Institute of Mathematics under the support of the National Science\nFoundation Grant No. 
0757627 (FRG: L-functions and Modular Forms).\nAny opinions, findings, and conclusions or recommendations expressed in this\nmaterial are those of the author(s) and do not necessarily reflect the\nviews of the sponsors.\nCredit also goes to:\n\nThe authors of the GMP library and the Python wrapper\ngmpy, enabling mpmath to become much faster at\nhigh precision\nThe authors of MPFR, pari/gp, MPFUN, and other arbitrary-\nprecision libraries, whose documentation has been helpful\nfor implementing many of the algorithms in mpmath\nWikipedia contributors; Abramowitz & Stegun; Gradshteyn & Ryzhik;\nWolfram Research for MathWorld and the Wolfram Functions site.\nThese are the main references used for special functions\nimplementations.\nGeorge Brandl for developing the Sphinx documentation tool\nused to build mpmath's documentation\n\nRelease history:\n\nVersion 1.3.0 released on March 7, 2023\nVersion 1.2.1 released on February 9, 2021\nVersion 1.2.0 released on February 1, 2021\nVersion 1.1.0 released on December 11, 2018\nVersion 1.0.0 released on September 27, 2017\nVersion 0.19 released on June 10, 2014\nVersion 0.18 released on December 31, 2013\nVersion 0.17 released on February 1, 2011\nVersion 0.16 released on September 24, 2010\nVersion 0.15 released on June 6, 2010\nVersion 0.14 released on February 5, 2010\nVersion 0.13 released on August 13, 2009\nVersion 0.12 released on June 9, 2009\nVersion 0.11 released on January 26, 2009\nVersion 0.10 released on October 15, 2008\nVersion 0.9 released on August 23, 2008\nVersion 0.8 released on April 20, 2008\nVersion 0.7 released on March 12, 2008\nVersion 0.6 released on January 13, 2008\nVersion 0.5 released on November 24, 2007\nVersion 0.4 released on November 3, 2007\nVersion 0.3 released on October 5, 2007\nVersion 0.2 released on October 2, 2007\nVersion 0.1 released on September 27, 2007\n\n\n1. Download & installation\nMpmath requires Python 3.8 or later versions. It has been tested\nwith CPython 3.8 through 3.12 and for PyPy.\nThe latest release of mpmath can be downloaded from the mpmath\nwebsite and from https://github.com/mpmath/mpmath/releases\nIt should also be available in the Python Package Index at\nhttps://pypi.python.org/pypi/mpmath\nTo install latest release of Mpmath with pip, simply run\npip install mpmath\nor from the source tree\npip install .\nThe latest development code is available from\nhttps://github.com/mpmath/mpmath\nSee the main documentation for more detailed instructions.\n\n2. Documentation\nDocumentation in reStructuredText format is available in the\ndocs directory included with the source package. These files\nare human-readable, but can be compiled to prettier HTML using\nSphinx.\nThe most recent documentation is also available in HTML format:\nhttps://mpmath.org/doc/current/\n\n3. Running tests\nThe unit tests in mpmath/tests/ can be run with pytest, see the main documentation.\nYou may also want to check out the demo scripts in the demo\ndirectory.\nThe master branch is automatically tested on the Github Actions.\n\n4. Known problems\nMpmath is a work in progress. Major issues include:\n\nSome functions may return incorrect values when given extremely\nlarge arguments or arguments very close to singularities.\nDirected rounding works for arithmetic operations. It is implemented\nheuristically for other operations, and their results may be off by one\nor two units in the last place (even if otherwise accurate).\nSome IEEE 754 features are not available. 
Inifinities and NaN are\npartially supported; denormal rounding is currently not available\nat all.\nThe interface for switching precision and rounding is not finalized.\nThe current method is not threadsafe.\n\n\n5. Help and bug reports\nGeneral questions and comments can be sent to the mpmath mailinglist,\nmailto:mpmath@googlegroups.com\nYou can also report bugs and send patches to the mpmath issue tracker,\nhttps://github.com/mpmath/mpmath/issues\n\n\n", "description": "Arbitrary precision floating point arithmetic"}, {"name": "moviepy", "readme": "\n\n\n\nREADME.md\n\n\n\n\ntqdm\nInstantly make your loops show a progress meter - just wrap any iterator with \"tqdm(iterator)\", and you're done!\nNote: an actively developed version is here: https://github.com/tqdm/tqdm\n\ntqdm (read taqadum, \u062a\u0642\u062f\u0651\u0645) means \"progress\" in arabic.\nYou can also use trange(N) as a shortcut for tqdm(xrange(N))\nHere's the doc:\ndef tqdm(iterable, desc='', total=None, leave=False, mininterval=0.5, miniters=1):\n    \"\"\"\n    Get an iterable object, and return an iterator which acts exactly like the\n    iterable, but prints a progress meter and updates it every time a value is\n    requested.\n    'desc' can contain a short string, describing the progress, that is added\n    in the beginning of the line.\n    'total' can give the number of expected iterations. If not given,\n    len(iterable) is used if it is defined.\n    If leave is False, tqdm deletes its traces from screen after it has finished\n    iterating over all elements.\n    If less than mininterval seconds or miniters iterations have passed since\n    the last progress meter update, it is not updated again.\n    \"\"\"\n\ndef trange(*args, **kwargs):\n    \"\"\"A shortcut for writing tqdm(xrange)\"\"\"\n    return tqdm(xrange(*args), **kwargs)\n\n\n", "description": "Video editing with Python"}, {"name": "monotonic", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nmonotonic\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nmonotonic\nNOTE: This library is considered stable and complete, and will not receive\nany further updates. Python versions 3.3 and newer include\ntime.monotonic() in the standard library.\nThis module provides a monotonic() function which returns the\nvalue (in fractional seconds) of a clock which never goes backwards.\nIt is compatible with Python 2 and Python 3.\nOn Python 3.3 or newer, monotonic will be an alias of\ntime.monotonic from the standard library. 
On older versions,\nit will fall back to an equivalent implementation:\n\n\n\nOS\nImplementation\n\n\n\n\nLinux, BSD, AIX\nclock_gettime\n\n\nWindows\nGetTickCount or GetTickCount64\n\n\nOS X\nmach_absolute_time\n\n\n\nIf no suitable implementation exists for the current platform,\nattempting to import this module (or to import from it) will\ncause a RuntimeError exception to be raised.\nmonotonic is available via the Python Cheese Shop (PyPI):\nhttps://pypi.python.org/pypi/monotonic/\nLicense\nCopyright 2014, 2015, 2016, 2017 Ori Livneh ori@wikimedia.org\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\n", "description": "Monotonically increasing time utilities for Python."}, {"name": "mne", "readme": "\n    \n\n\nMNE-Python\nMNE-Python software is an open-source Python package for exploring,\nvisualizing, and analyzing human neurophysiological data such as MEG, EEG, sEEG,\nECoG, and more. It includes modules for data input/output, preprocessing,\nvisualization, source estimation, time-frequency analysis, connectivity analysis,\nmachine learning, and statistics.\n\nDocumentation\nMNE documentation for MNE-Python is available online.\n\n\nForum\nOur user forum is https://mne.discourse.group and is the best place to ask\nquestions about MNE-Python usage or about the contribution process. It also\nincludes job opportunities and other announcements.\n\n\nInstalling MNE-Python\nTo install the latest stable version of MNE-Python, you can use pip in a terminal:\n$ pip install --upgrade mne\n\nMNE-Python 0.17 was the last release to support Python 2.7\nMNE-Python 0.18 requires Python 3.5 or higher\nMNE-Python 0.21 requires Python 3.6 or higher\nMNE-Python 0.24 requires Python 3.7 or higher\nMNE-Python 1.4 requires Python 3.8 or higher\n\nFor more complete instructions and more advanced installation methods (e.g. 
for\nthe latest development version), see the installation guide.\n\n\nGet the latest code\nTo install the latest version of the code using pip open a terminal and type:\n$ pip install --upgrade git+https://github.com/mne-tools/mne-python@main\nTo get the latest code using git, open a terminal and type:\n$ git clone https://github.com/mne-tools/mne-python.git\n\n\nDependencies\nThe minimum required dependencies to run MNE-Python are:\n\nPython >= 3.8\nNumPy >= 1.20.2\nSciPy >= 1.6.3\nMatplotlib >= 3.4.0\npooch >= 1.5\ntqdm\nJinja2\ndecorator\n\nFor full functionality, some functions require:\n\nScikit-learn >= 0.24.2\njoblib >= 0.15 (for parallelization control)\nmne-qt-browser >= 0.1 (for fast raw data visualization)\nQt5 >= 5.12 via one of the following bindings (for fast raw data visualization and interactive 3D visualization):\n\nPyQt6 >= 6.0\nPySide6 >= 6.0\nPyQt5 >= 5.12\nPySide2 >= 5.12\n\n\nNumba >= 0.53.1\nNiBabel >= 3.2.1\nOpenMEEG >= 2.5.6\nPandas >= 1.2.4\nPicard >= 0.3\nCuPy >= 9.0.0 (for NVIDIA CUDA acceleration)\nDIPY >= 1.4.0\nImageio >= 2.8.0\nPyVista >= 0.32 (for 3D visualization)\npyvistaqt >= 0.4 (for 3D visualization)\nmffpy >= 0.5.7\nh5py\nh5io\npymatreader\n\n\n\nContributing to MNE-Python\nPlease see the documentation on the MNE-Python homepage:\nhttps://mne.tools/dev/install/contributing.html\n\n\nLicensing\nMNE-Python is BSD-licenced (BSD-3-Clause):\n\nThis software is OSI Certified Open Source Software.\nOSI Certified is a certification mark of the Open Source Initiative.\nCopyright (c) 2011-2022, authors of MNE-Python.\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\nRedistributions of source code must retain the above copyright notice,\nthis list of conditions and the following disclaimer.\nRedistributions in binary form must reproduce the above copyright notice,\nthis list of conditions and the following disclaimer in the documentation\nand/or other materials provided with the distribution.\nNeither the names of MNE-Python authors nor the names of any\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nThis software is provided by the copyright holders and contributors\n\u201cas is\u201d and any express or implied warranties, including, but not\nlimited to, the implied warranties of merchantability and fitness for\na particular purpose are disclaimed. In no event shall the copyright\nowner or contributors be liable for any direct, indirect, incidental,\nspecial, exemplary, or consequential damages (including, but not\nlimited to, procurement of substitute goods or services; loss of use,\ndata, or profits; or business interruption) however caused and on any\ntheory of liability, whether in contract, strict liability, or tort\n(including negligence or otherwise) arising in any way out of the use\nof this software, even if advised of the possibility of such\ndamage.\n\n\n\n", "description": "Python for exploring, visualizing and analyzing neurophysiological data."}, {"name": "mizani", "readme": "\nMizani\n\n\n\n\n\nMizani is a scales package for graphics. 
It is written in Python and is\nbased on Hadley Wickham's Scales.\nSee the documentation\nfor how to use it in a graphics system.\nInstallation\nOfficial Release\n$ pip install mizani\n\nDevelopment version\n$ pip install git+https://github.com/has2k1/mizani.git@main\n\n", "description": "Scales library for graphics."}, {"name": "mistune", "readme": "\nA fast yet powerful Python Markdown parser with renderers and plugins.\n\nOverview\nConvert Markdown to HTML with ease:\nimport mistune\nmistune.html(your_markdown_text)\n\n\nUseful Links\n\nGitHub: https://github.com/lepture/mistune\nDocs: https://mistune.lepture.com/\n\n\n\nLicense\nMistune is licensed under BSD. Please see LICENSE for licensing details.\n\n", "description": "Markdown parser in pure Python with renderers for IPython/Jupyter Notebook."}, {"name": "matplotlib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\nMatplotlib is a comprehensive library for creating static, animated, and\ninteractive visualizations in Python.\nCheck out our home page for more information.\n\nMatplotlib produces publication-quality figures in a variety of hardcopy\nformats and interactive environments across platforms. Matplotlib can be\nused in Python scripts, Python/IPython shells, web application servers,\nand various graphical user interface toolkits.\nInstall\nSee the install\ndocumentation,\nwhich is generated from /doc/users/installing/index.rst\nContribute\nYou've discovered a bug or something else you want to change -\nexcellent!\nYou've worked out a way to fix it -- even better!\nYou want to tell us about it -- best of all!\nStart at the contributing\nguide!\nContact\nDiscourse is the discussion forum\nfor general questions and discussions and our recommended starting\npoint.\nOur active mailing lists (which are mirrored on Discourse) are:\n\nUsers\nmailing list: matplotlib-users@python.org\nAnnouncement\nmailing list: matplotlib-announce@python.org\nDevelopment\nmailing list: matplotlib-devel@python.org\n\nGitter is for coordinating\ndevelopment and asking questions directly related to contributing to\nmatplotlib.\nCiting Matplotlib\nIf Matplotlib contributes to a project that leads to publication, please\nacknowledge this by citing Matplotlib.\nA ready-made citation\nentry is\navailable.\n", "description": "venn - Plots Venn diagrams with Matplotlib."}, {"name": "matplotlib-venn", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nVenn diagram plotting routines for Python/Matplotlib\nInstallation\nDependencies\nUsage\nQuestions\nSee also\n\n\n\n\n\nREADME.rst\n\n\n\n\nVenn diagram plotting routines for Python/Matplotlib\n\nRoutines for plotting area-weighted two- and three-circle venn diagrams.\n\nInstallation\nThe simplest way to install the package is via easy_install or\npip:\n$ easy_install matplotlib-venn\n\n\nDependencies\n\nnumpy,\nscipy,\nmatplotlib.\n\n\nUsage\nThe package provides four main functions: venn2,\nvenn2_circles, venn3 and venn3_circles.\nThe functions venn2 and venn2_circles accept as their only\nrequired argument a 3-element list (Ab, aB, AB) of subset sizes,\ne.g.:\nvenn2(subsets = (3, 2, 1))\n\nand draw a two-circle venn diagram with respective region areas. In\nthe particular example, the region, corresponding to subset A and\nnot B will be three times larger in area than the region,\ncorresponding to subset A and B. Alternatively, you can simply\nprovide a list of two set or Counter (i.e. 
multi-set) objects instead (new in version 0.7),\ne.g.:\nvenn2([set(['A', 'B', 'C', 'D']), set(['D', 'E', 'F'])])\n\nSimilarly, the functions venn3 and venn3_circles take a\n7-element list of subset sizes (Abc, aBc, ABc, abC, AbC, aBC,\nABC), and draw a three-circle area-weighted venn\ndiagram. Alternatively, you can provide a list of three set or Counter objects\n(rather than counting sizes for all 7 subsets).\nThe functions venn2_circles and venn3_circles draw just the\ncircles, whereas the functions venn2 and venn3 draw the\ndiagrams as a collection of colored patches, annotated with text\nlabels. In addition (version 0.7+), functions venn2_unweighted and\nvenn3_unweighted draw the Venn diagrams without area-weighting.\nNote that for a three-circle venn diagram it is not in general\npossible to achieve exact correspondence between the required set\nsizes and region areas, however in most cases the picture will still\nprovide a decent indication.\nThe functions venn2_circles and venn3_circles return the list of matplotlib.patch.Circle objects that may be tuned further\nto your liking. The functions venn2 and venn3 return an object of class VennDiagram,\nwhich gives access to constituent patches, text elements, and (since\nversion 0.7) the information about the centers and radii of the\ncircles.\nBasic Example:\nfrom matplotlib_venn import venn2\nvenn2(subsets = (3, 2, 1))\n\nFor the three-circle case:\nfrom matplotlib_venn import venn3\nvenn3(subsets = (1, 1, 1, 2, 1, 2, 2), set_labels = ('Set1', 'Set2', 'Set3'))\n\nA more elaborate example:\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom matplotlib_venn import venn3, venn3_circles\nplt.figure(figsize=(4,4))\nv = venn3(subsets=(1, 1, 1, 1, 1, 1, 1), set_labels = ('A', 'B', 'C'))\nv.get_patch_by_id('100').set_alpha(1.0)\nv.get_patch_by_id('100').set_color('white')\nv.get_label_by_id('100').set_text('Unknown')\nv.get_label_by_id('A').set_text('Set \"A\"')\nc = venn3_circles(subsets=(1, 1, 1, 1, 1, 1, 1), linestyle='dashed')\nc[0].set_lw(1.0)\nc[0].set_ls('dotted')\nplt.title(\"Sample Venn diagram\")\nplt.annotate('Unknown set', xy=v.get_label_by_id('100').get_position() - np.array([0, 0.05]), xytext=(-70,-70),\n             ha='center', textcoords='offset points', bbox=dict(boxstyle='round,pad=0.5', fc='gray', alpha=0.1),\n             arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=0.5',color='gray'))\nplt.show()\n\nAn example with multiple subplots (new in version 0.6):\nfrom matplotlib_venn import venn2, venn2_circles\nfigure, axes = plt.subplots(2, 2)\nvenn2(subsets={'10': 1, '01': 1, '11': 1}, set_labels = ('A', 'B'), ax=axes[0][0])\nvenn2_circles((1, 2, 3), ax=axes[0][1])\nvenn3(subsets=(1, 1, 1, 1, 1, 1, 1), set_labels = ('A', 'B', 'C'), ax=axes[1][0])\nvenn3_circles({'001': 10, '100': 20, '010': 21, '110': 13, '011': 14}, ax=axes[1][1])\nplt.show()\n\nPerhaps the most common use case is generating a Venn diagram given\nthree sets of objects:\nset1 = set(['A', 'B', 'C', 'D'])\nset2 = set(['B', 'C', 'D', 'E'])\nset3 = set(['C', 'D',' E', 'F', 'G'])\n\nvenn3([set1, set2, set3], ('Set1', 'Set2', 'Set3'))\nplt.show()\n\n\nQuestions\n\nIf you ask your questions at StackOverflow and tag them matplotlib-venn, chances are high you'll get an answer from the maintainer of this package.\n\n\nSee also\n\nReport issues and submit fixes at Github:\nhttps://github.com/konstantint/matplotlib-venn\nCheck out the DEVELOPER-README.rst for development-related notes.\n\nSome alternative means of plotting a Venn diagram (as 
of\nOctober 2012) are reviewed in the blog post:\nhttp://fouryears.eu/2012/10/13/venn-diagrams-in-python/\n\nThe matplotlib-subsets package\nvisualizes a hierarchy of sets as a tree of rectangles.\n\nThe matplotlib_venn_wordcloud package\ncombines Venn diagrams with word clouds for a pretty amazing (and amusing) result.\n\n\n\n\n"}, {"name": "matplotlib-inline", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nMatplotlib Inline Back-end for IPython and Jupyter\nInstallation\nUsage\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nMatplotlib Inline Back-end for IPython and Jupyter\nThis package provides support for matplotlib to display figures directly inline in the Jupyter notebook and related clients, as shown below.\nInstallation\nWith conda:\nconda install -c conda-forge matplotlib-inline\nWith pip:\npip install matplotlib-inline\nUsage\nNote that in current versions of JupyterLab and Jupyter Notebook, the explicit use of the %matplotlib inline directive is not needed anymore, though other third-party clients may still require it.\nThis will produce a figure immediately below:\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 3*np.pi, 500)\nplt.plot(x, np.sin(x**2))\nplt.title('A simple chirp');\nLicense\nLicensed under the terms of the BSD 3-Clause License, by the IPython Development Team (see LICENSE file).\n\n\n"}, {"name": "MarkupSafe", "readme": "\nMarkupSafe implements a text object that escapes characters so it is\nsafe to use in HTML and XML. Characters that have special meanings are\nreplaced so that they display as the actual characters. This mitigates\ninjection attacks, meaning untrusted user input can safely be displayed\non a page.\n\nInstalling\nInstall and update using pip:\npip install -U MarkupSafe\n\n\nExamples\n>>> from markupsafe import Markup, escape\n\n>>> # escape replaces special characters and wraps in Markup\n>>> escape(\"<script>alert(document.cookie);</script>\")\nMarkup('&lt;script&gt;alert(document.cookie);&lt;/script&gt;')\n\n>>> # wrap in Markup to mark text \"safe\" and prevent escaping\n>>> Markup(\"<strong>Hello</strong>\")\nMarkup('<strong>hello</strong>')\n\n>>> escape(Markup(\"<strong>Hello</strong>\"))\nMarkup('<strong>hello</strong>')\n\n>>> # Markup is a str subclass\n>>> # methods and operators escape their arguments\n>>> template = Markup(\"Hello <em>{name}</em>\")\n>>> template.format(name='\"World\"')\nMarkup('Hello <em>&#34;World&#34;</em>')\n\n\nDonate\nThe Pallets organization develops and supports MarkupSafe and other\npopular packages. 
In order to grow the community of contributors and\nusers, and allow the maintainers to devote more time to the projects,\nplease donate today.\n\n\nLinks\n\nDocumentation: https://markupsafe.palletsprojects.com/\nChanges: https://markupsafe.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/MarkupSafe/\nSource Code: https://github.com/pallets/markupsafe/\nIssue Tracker: https://github.com/pallets/markupsafe/issues/\nChat: https://discord.gg/pallets\n\n\n", "description": "Implements a text object that escapes characters to make strings safe for using in HTML and XML."}, {"name": "markdownify", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nInstallation\nUsage\nOptions\nConverting BeautifulSoup objects\nCreating Custom Converters\nCommand Line Interface\nDevelopment\n\n\n\n\n\nREADME.rst\n\n\n\n\n   \n\nInstallation\npip install markdownify\n\nUsage\nConvert some HTML to Markdown:\nfrom markdownify import markdownify as md\nmd('<b>Yay</b> <a href=\"http://github.com\">GitHub</a>')  # > '**Yay** [GitHub](http://github.com)'\nSpecify tags to exclude:\nfrom markdownify import markdownify as md\nmd('<b>Yay</b> <a href=\"http://github.com\">GitHub</a>', strip=['a'])  # > '**Yay** GitHub'\n...or specify the tags you want to include:\nfrom markdownify import markdownify as md\nmd('<b>Yay</b> <a href=\"http://github.com\">GitHub</a>', convert=['b'])  # > '**Yay** GitHub'\n\nOptions\nMarkdownify supports the following options:\n\nstrip\nA list of tags to strip. This option can't be used with the\nconvert option.\nconvert\nA list of tags to convert. This option can't be used with the\nstrip option.\nautolinks\nA boolean indicating whether the \"automatic link\" style should be used when\na a tag's contents match its href. Defaults to True.\ndefault_title\nA boolean to enable setting the title of a link to its href, if no title is\ngiven. Defaults to False.\nheading_style\nDefines how headings should be converted. Accepted values are ATX,\nATX_CLOSED, SETEXT, and UNDERLINED (which is an alias for\nSETEXT). Defaults to UNDERLINED.\nbullets\nAn iterable (string, list, or tuple) of bullet styles to be used. If the\niterable only contains one item, it will be used regardless of how deeply\nlists are nested. Otherwise, the bullet will alternate based on nesting\nlevel. Defaults to '*+-'.\nstrong_em_symbol\nIn markdown, both * and _ are used to encode strong or\nemphasized texts. Either of these symbols can be chosen by the options\nASTERISK (default) or UNDERSCORE respectively.\nsub_symbol, sup_symbol\nDefine the chars that surround <sub> and <sup> text. Defaults to an\nempty string, because this is non-standard behavior. Could be something like\n~ and ^ to result in ~sub~ and ^sup^.\nnewline_style\nDefines the style of marking linebreaks (<br>) in markdown. The default\nvalue SPACES of this option will adopt the usual two spaces and a newline,\nwhile BACKSLASH will convert a linebreak to \\\\n (a backslash and a\nnewline). 
While the latter convention is non-standard, it is commonly\npreferred and supported by a lot of interpreters.\ncode_language\nDefines the language that should be assumed for all <pre> sections.\nUseful, if all code on a page is in the same programming language and\nshould be annotated with ```python or similar.\nDefaults to '' (empty string) and can be any string.\ncode_language_callback\nWhen the HTML code contains pre tags that in some way provide the code\nlanguage, for example as class, this callback can be used to extract the\nlanguage from the tag and prefix it to the converted pre tag.\nThe callback gets one single argument, an BeautifylSoup object, and returns\na string containing the code language, or None.\nAn example to use the class name as code language could be:\ndef callback(el):\n    return el['class'][0] if el.has_attr('class') else None\n\nDefaults to None.\n\nescape_asterisks\nIf set to False, do not escape * to \\* in text.\nDefaults to True.\nescape_underscores\nIf set to False, do not escape _ to \\_ in text.\nDefaults to True.\nkeep_inline_images_in\nImages are converted to their alt-text when the images are located inside\nheadlines or table cells. If some inline images should be converted to\nmarkdown images instead, this option can be set to a list of parent tags\nthat should be allowed to contain inline images, for example ['td'].\nDefaults to an empty list.\nwrap, wrap_width\nIf wrap is set to True, all text paragraphs are wrapped at\nwrap_width characters. Defaults to False and 80.\nUse with newline_style=BACKSLASH to keep line breaks in paragraphs.\n\nOptions may be specified as kwargs to the markdownify function, or as a\nnested Options class in MarkdownConverter subclasses.\n\nConverting BeautifulSoup objects\nfrom markdownify import MarkdownConverter\n\n# Create shorthand method for conversion\ndef md(soup, **options):\n    return MarkdownConverter(**options).convert_soup(soup)\n\nCreating Custom Converters\nIf you have a special usecase that calls for a special conversion, you can\nalways inherit from MarkdownConverter and override the method you want to\nchange:\nfrom markdownify import MarkdownConverter\n\nclass ImageBlockConverter(MarkdownConverter):\n    \"\"\"\n    Create a custom MarkdownConverter that adds two newlines after an image\n    \"\"\"\n    def convert_img(self, el, text, convert_as_inline):\n        return super().convert_img(el, text, convert_as_inline) + '\\n\\n'\n\n# Create shorthand method for conversion\ndef md(html, **options):\n    return ImageBlockConverter(**options).convert(html)\n\nCommand Line Interface\nUse markdownify example.html > example.md or pipe input from stdin\n(cat example.html | markdownify > example.md).\nCall markdownify -h to see all available options.\nThey are the same as listed above and take the same arguments.\n\nDevelopment\nTo run tests and the linter run pip install tox once, then tox.\n\n\n", "description": "HTML to Markdown converter."}, {"name": "markdown2", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nInstall\nQuick Usage\nExtra Syntax (aka extensions)\nProject\nContributing\nTest Suite\n\n\n\n\n\nREADME.md\n\n\n\n\nMarkdown is a light text markup format and a processor to convert that to HTML.\nThe originator describes it as follows:\n\nMarkdown is a text-to-HTML conversion tool for web writers.\nMarkdown allows you to write using an easy-to-read,\neasy-to-write plain text format, then convert it to\nstructurally valid XHTML (or HTML).\n-- http://daringfireball.net/projects/markdown/\n\nThis (markdown2) is a 
fast and complete Python implementation of Markdown. It\nwas written to closely match the behaviour of the original Perl-implemented\nMarkdown.pl. Markdown2 also comes with a number of extensions (called\n\"extras\") for things like syntax coloring, tables, header-ids. See the\n\"Extra Syntax\" section below. \"markdown2\" supports all Python versions\n3.5+ (and pypy and jython, though I don't frequently test those).\nThere is another Python\nmarkdown.py. However, at\nleast at the time this project was started, markdown2.py was faster (see the\nPerformance\nNotes) and,\nto my knowledge, more correct (see Testing\nNotes).\nThat was a while ago though, so you shouldn't discount Python-markdown from\nyour consideration.\nFollow @trentmick\nfor updates to python-markdown2.\nInstall\nTo install it in your Python installation run one of the following:\npip install markdown2\npip install markdown2[all]  # to install all optional dependencies (eg: Pygments for code syntax highlighting)\npypm install markdown2      # if you use ActivePython (activestate.com/activepython)\neasy_install markdown2      # if this is the best you have\npython setup.py install\n\nHowever, everything you need to run this is in \"lib/markdown2.py\". If it is\neasier for you, you can just copy that file to somewhere on your PythonPath\n(to use as a module) or executable path (to use as a script).\nQuick Usage\nAs a module:\n>>> import markdown2\n>>> markdown2.markdown(\"*boo!*\")  # or use `html = markdown_path(PATH)`\n'<p><em>boo!</em></p>\\n'\n\n>>> from markdown2 import Markdown\n>>> markdowner = Markdown()\n>>> markdowner.convert(\"*boo!*\")\n'<p><em>boo!</em></p>\\n'\n>>> markdowner.convert(\"**boom!**\")\n'<p><strong>boom!</strong></p>\\n'\nAs a script (CLI):\n$ python markdown2.py foo.md > foo.html\nor\n$ python -m markdown2 foo.md > foo.html\nI think pip-based installation will enable this as well:\n$ markdown2 foo.md > foo.html\nSee the project wiki,\nlib/markdown2.py\ndocstrings and/or python markdown2.py --help for more details.\nExtra Syntax (aka extensions)\nMany Markdown processors include support for additional optional syntax\n(often called \"extensions\") and markdown2 is no exception. With markdown2 these\nare called \"extras\".  Using the \"footnotes\" extra as an example, here is how\nyou use an extra ... as a module:\n$ python markdown2.py --extras footnotes foo.md > foo.html\nas a script:\n>>> import markdown2\n>>> markdown2.markdown(\"*boo!*\", extras=[\"footnotes\"])\n'<p><em>boo!</em></p>\\n'\nThere are a number of currently implemented extras for tables, footnotes,\nsyntax coloring of <pre>-blocks, auto-linking patterns, table of contents,\nSmarty Pants (for fancy quotes, dashes, etc.) and more. See the Extras\nwiki page for full\ndetails.\nProject\nThe python-markdown2 project lives at\nhttps://github.com/trentm/python-markdown2/.  (Note: On Mar 6, 2011 this\nproject was moved from Google Code\nto here on Github.) See also, markdown2 on the Python Package Index\n(PyPI).\nThe change log: https://github.com/trentm/python-markdown2/blob/master/CHANGES.md\nTo report a bug: https://github.com/trentm/python-markdown2/issues\nContributing\nWe welcome pull requests from the community. Please take a look at the TODO for opportunities to help this project. 
For those wishing to submit a pull request to python-markdown2 please ensure it fulfills the following requirements:\n\nIt must pass PEP8.\nIt must include relevant test coverage.\nBug fixes must include a regression test that exercises the bug.\nThe entire test suite must pass.\nThe README and/or docs are updated accordingly.\n\nTest Suite\nThis markdown implementation passes a fairly extensive test suite. To run it:\nmake test\nThe crux of the test suite is a number of \"cases\" directories -- each with a\nset of matching .text (input) and .html (expected output) files. These are:\ntm-cases/                   Tests authored for python-markdown2 (tm==\"Trent Mick\")\nmarkdowntest-cases/         Tests from the 3rd-party MarkdownTest package\nphp-markdown-cases/         Tests from the 3rd-party MDTest package\nphp-markdown-extra-cases/   Tests also from MDTest package\n\nSee the Testing Notes wiki\npage for full\ndetails.\n\n\n", "description": "Fast and complete Markdown parser for Python."}, {"name": "lxml", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWhat is lxml?\nSupport the project\nProject income report\nLegal Notice for Donations\n\n\n\n\n\nREADME.rst\n\n\n\n\nWhat is lxml?\nlxml is the most feature-rich and easy-to-use library for processing XML and HTML in the Python language.\nIt's also very fast and memory friendly, just so you know.\nFor an introduction and further documentation, see doc/main.txt.\nFor installation information, see INSTALL.txt.\nFor issue tracker, see https://bugs.launchpad.net/lxml\n\nSupport the project\nlxml has been downloaded from the Python Package Index\nmillions of times and is also available directly in many package\ndistributions, e.g. for Linux or macOS.\nMost people who use lxml do so because they like using it.\nYou can show us that you like it by blogging about your experience\nwith it and linking to the project website.\nIf you are using lxml for your work and feel like giving a bit of\nyour own benefit back to support the project, consider sending us\nmoney through GitHub Sponsors, Tidelift or PayPal that we can use\nto buy us free time for the maintenance of this great library, to\nfix bugs in the software, review and integrate code contributions,\nto improve its features and documentation, or to just take a deep\nbreath and have a cup of tea every once in a while.\nPlease read the Legal Notice below, at the bottom of this page.\nThank you for your support.\nSupport lxml through GitHub Sponsors\nvia a Tidelift subscription\nor via PayPal:\n\nPlease contact Stefan Behnel\nfor other ways to support the lxml project,\nas well as commercial consulting, customisations and trainings on lxml and\nfast Python XML processing.\nNote that we are not accepting donations in crypto currencies.\nMuch of the development and hosting for lxml is done in a carbon-neutral way\nor with compensated and very low emissions.\nCrypto currencies do not fit into that ambition.\nAppVeyor and GitHub Actions\nsupport the lxml project with their build and CI servers.\nJetbrains supports the lxml project by donating free licenses of their\nPyCharm IDE.\nAnother supporter of the lxml project is\nCOLOGNE Webdesign.\n\nProject income report\nlxml has about 60 million downloads\nper month on PyPI.\n\nTotal project income in 2022: EUR 2566.38  (213.87 \u20ac / month)\nTidelift: EUR 2539.38\nPaypal: EUR 27.00\n\n\nTotal project income in 2021: EUR 4640.37  (386.70 \u20ac / month)\nTidelift: EUR 4066.66\nPaypal: EUR 223.71\nother: EUR 350.00\n\n\nTotal project income in 2020: EUR 6065,86  
(506.49 \u20ac / month)\nTidelift: EUR 4064.77\nPaypal: EUR 1401.09\nother: EUR 600.00\n\n\nTotal project income in 2019: EUR 717.52  (59.79 \u20ac / month)\nTidelift: EUR 360.30\nPaypal: EUR 157.22\nother: EUR 200.00\n\n\n\n\nLegal Notice for Donations\nAny donation that you make to the lxml project is voluntary and\nis not a fee for any services, goods, or advantages.  By making\na donation to the lxml project, you acknowledge that we have the\nright to use the money you donate in any lawful way and for any\nlawful purpose we see fit and we are not obligated to disclose\nthe way and purpose to any party unless required by applicable\nlaw.  Although lxml is free software, to the best of our knowledge\nthe lxml project does not have any tax exempt status.  The lxml\nproject is neither a registered non-profit corporation nor a\nregistered charity in any country.  Your donation may or may not\nbe tax-deductible; please consult your tax advisor in this matter.\nWe will not publish or disclose your name and/or e-mail address\nwithout your consent, unless required by applicable law.  Your\ndonation is non-refundable.\n\n\n", "description": "Pythonic XML processing library using libxml2 and libxslt."}, {"name": "loguru", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nInstallation\nFeatures\nTake the tour\nReady to use out of the box without boilerplate\nNo Handler, no Formatter, no Filter: one function to rule them all\nEasier file logging with rotation / retention / compression\nModern string formatting using braces style\nExceptions catching within threads or main\nPretty logging with colors\nAsynchronous, Thread-safe, Multiprocess-safe\nFully descriptive exceptions\nStructured logging as needed\nLazy evaluation of expensive functions\nCustomizable levels\nBetter datetime handling\nSuitable for scripts and libraries\nEntirely compatible with standard logging\nPersonalizable defaults through environment variables\nConvenient parser\nExhaustive notifier\n10x faster than built-in logging\nDocumentation\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLoguru is a library which aims to bring enjoyable logging in Python.\nDid you ever feel lazy about configuring a logger and used print() instead?... I did, yet logging is fundamental to every application and eases the process of debugging. Using Loguru you have no excuse not to use logging from the start, this is as simple as from loguru import logger.\nAlso, this library is intended to make Python logging less painful by adding a bunch of useful functionalities that solve caveats of the standard loggers. 
Using logs in your application should be an automatism, Loguru tries to make it both pleasant and powerful.\n\nInstallation\npip install loguru\n\nFeatures\n\nReady to use out of the box without boilerplate\nNo Handler, no Formatter, no Filter: one function to rule them all\nEasier file logging with rotation / retention / compression\nModern string formatting using braces style\nExceptions catching within threads or main\nPretty logging with colors\nAsynchronous, Thread-safe, Multiprocess-safe\nFully descriptive exceptions\nStructured logging as needed\nLazy evaluation of expensive functions\nCustomizable levels\nBetter datetime handling\nSuitable for scripts and libraries\nEntirely compatible with standard logging\nPersonalizable defaults through environment variables\nConvenient parser\nExhaustive notifier\n 10x faster than built-in logging \n\n\nTake the tour\n\nReady to use out of the box without boilerplate\nThe main concept of Loguru is that there is one and only one logger.\nFor convenience, it is pre-configured and outputs to stderr to begin with (but that's entirely configurable).\nfrom loguru import logger\n\nlogger.debug(\"That's it, beautiful and simple logging!\")\nThe logger is just an interface which dispatches log messages to configured handlers. Simple, right?\n\nNo Handler, no Formatter, no Filter: one function to rule them all\nHow to add a handler? How to set up logs formatting? How to filter messages? How to set level?\nOne answer: the add() function.\nlogger.add(sys.stderr, format=\"{time} {level} {message}\", filter=\"my_module\", level=\"INFO\")\nThis function should be used to register sinks which are responsible for managing log messages contextualized with a record dict. A sink can take many forms: a simple function, a string path, a file-like object, a coroutine function or a built-in Handler.\nNote that you may also remove() a previously added handler by using the identifier returned while adding it. This is particularly useful if you want to supersede the default stderr handler: just call logger.remove() to make a fresh start.\n\nEasier file logging with rotation / retention / compression\nIf you want to send logged messages to a file, you just have to use a string path as the sink. It can be automatically timed too for convenience:\nlogger.add(\"file_{time}.log\")\nIt is also easily configurable if you need rotating logger, if you want to remove older logs, or if you wish to compress your files at closure.\nlogger.add(\"file_1.log\", rotation=\"500 MB\")    # Automatically rotate too big file\nlogger.add(\"file_2.log\", rotation=\"12:00\")     # New file is created each day at noon\nlogger.add(\"file_3.log\", rotation=\"1 week\")    # Once the file is too old, it's rotated\n\nlogger.add(\"file_X.log\", retention=\"10 days\")  # Cleanup after some time\n\nlogger.add(\"file_Y.log\", compression=\"zip\")    # Save some loved space\n\nModern string formatting using braces style\nLoguru favors the much more elegant and powerful {} formatting over %, logging functions are actually equivalent to str.format().\nlogger.info(\"If you're using Python {}, prefer {feature} of course!\", 3.6, feature=\"f-strings\")\n\nExceptions catching within threads or main\nHave you ever seen your program crashing unexpectedly without seeing anything in the log file? Did you ever notice that exceptions occurring in threads were not logged? 
This can be solved using the catch() decorator / context manager which ensures that any error is correctly propagated to the logger.\n@logger.catch\ndef my_function(x, y, z):\n    # An error? It's caught anyway!\n    return 1 / (x + y + z)\n\nPretty logging with colors\nLoguru automatically adds colors to your logs if your terminal is compatible. You can define your favorite style by using markup tags in the sink format.\nlogger.add(sys.stdout, colorize=True, format=\"<green>{time}</green> <level>{message}</level>\")\n\nAsynchronous, Thread-safe, Multiprocess-safe\nAll sinks added to the logger are thread-safe by default. They are not multiprocess-safe, but you can enqueue the messages to ensure logs integrity. This same argument can also be used if you want async logging.\nlogger.add(\"somefile.log\", enqueue=True)\nCoroutine functions used as sinks are also supported and should be awaited with complete().\n\nFully descriptive exceptions\nLogging exceptions that occur in your code is important to track bugs, but it's quite useless if you don't know why it failed. Loguru helps you identify problems by allowing the entire stack trace to be displayed, including values of variables (thanks better_exceptions for this!).\nThe code:\n# Caution, \"diagnose=True\" is the default and may leak sensitive data in prod\nlogger.add(\"out.log\", backtrace=True, diagnose=True)\n\ndef func(a, b):\n    return a / b\n\ndef nested(c):\n    try:\n        func(5, c)\n    except ZeroDivisionError:\n        logger.exception(\"What?!\")\n\nnested(0)\nWould result in:\n2018-07-17 01:38:43.975 | ERROR    | __main__:nested:10 - What?!\nTraceback (most recent call last):\n\n  File \"test.py\", line 12, in <module>\n    nested(0)\n    \u2514 <function nested at 0x7f5c755322f0>\n\n> File \"test.py\", line 8, in nested\n    func(5, c)\n    \u2502       \u2514 0\n    \u2514 <function func at 0x7f5c79fc2e18>\n\n  File \"test.py\", line 4, in func\n    return a / b\n           \u2502   \u2514 0\n           \u2514 5\n\nZeroDivisionError: division by zero\n\nNote that this feature won't work on default Python REPL due to unavailable frame data.\nSee also: Security considerations when using Loguru.\n\nStructured logging as needed\nWant your logs to be serialized for easier parsing or to pass them around? 
Using the serialize argument, each log message will be converted to a JSON string before being sent to the configured sink.\nlogger.add(custom_sink_function, serialize=True)\nUsing bind() you can contextualize your logger messages by modifying the extra record attribute.\nlogger.add(\"file.log\", format=\"{extra[ip]} {extra[user]} {message}\")\ncontext_logger = logger.bind(ip=\"192.168.0.1\", user=\"someone\")\ncontext_logger.info(\"Contextualize your logger easily\")\ncontext_logger.bind(user=\"someone_else\").info(\"Inline binding of extra attribute\")\ncontext_logger.info(\"Use kwargs to add context during formatting: {user}\", user=\"anybody\")\nIt is possible to modify a context-local state temporarily with contextualize():\nwith logger.contextualize(task=task_id):\n    do_something()\n    logger.info(\"End of task\")\nYou can also have more fine-grained control over your logs by combining bind() and filter:\nlogger.add(\"special.log\", filter=lambda record: \"special\" in record[\"extra\"])\nlogger.debug(\"This message is not logged to the file\")\nlogger.bind(special=True).info(\"This message, though, is logged to the file!\")\nFinally, the patch() method allows dynamic values to be attached to the record dict of each new message:\nlogger.add(sys.stderr, format=\"{extra[utc]} {message}\")\nlogger = logger.patch(lambda record: record[\"extra\"].update(utc=datetime.utcnow()))\n\nLazy evaluation of expensive functions\nSometime you would like to log verbose information without performance penalty in production, you can use the opt() method to achieve this.\nlogger.opt(lazy=True).debug(\"If sink level <= DEBUG: {x}\", x=lambda: expensive_function(2**64))\n\n# By the way, \"opt()\" serves many usages\nlogger.opt(exception=True).info(\"Error stacktrace added to the log message (tuple accepted too)\")\nlogger.opt(colors=True).info(\"Per message <blue>colors</blue>\")\nlogger.opt(record=True).info(\"Display values from the record (eg. {record[thread]})\")\nlogger.opt(raw=True).info(\"Bypass sink formatting\\n\")\nlogger.opt(depth=1).info(\"Use parent stack context (useful within wrapped functions)\")\nlogger.opt(capture=False).info(\"Keyword arguments not added to {dest} dict\", dest=\"extra\")\n\nCustomizable levels\nLoguru comes with all standard logging levels to which trace() and success() are added. Do you need more? Then, just create it by using the level() function.\nnew_level = logger.level(\"SNAKY\", no=38, color=\"<yellow>\", icon=\"\ud83d\udc0d\")\n\nlogger.log(\"SNAKY\", \"Here we go!\")\n\nBetter datetime handling\nThe standard logging is bloated with arguments like datefmt or msecs, %(asctime)s and %(created)s, naive datetimes without timezone information, not intuitive formatting, etc. Loguru fixes it:\nlogger.add(\"file.log\", format=\"{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}\")\n\nSuitable for scripts and libraries\nUsing the logger in your scripts is easy, and you can configure() it at start. To use Loguru from inside a library, remember to never call add() but use disable() instead so logging functions become no-op. 
If a developer wishes to see your library's logs, they can enable() it again.\n# For scripts\nconfig = {\n    \"handlers\": [\n        {\"sink\": sys.stdout, \"format\": \"{time} - {message}\"},\n        {\"sink\": \"file.log\", \"serialize\": True},\n    ],\n    \"extra\": {\"user\": \"someone\"}\n}\nlogger.configure(**config)\n\n# For libraries, should be your library's `__name__`\nlogger.disable(\"my_library\")\nlogger.info(\"No matter added sinks, this message is not displayed\")\n\n# In your application, enable the logger in the library\nlogger.enable(\"my_library\")\nlogger.info(\"This message however is propagated to the sinks\")\nFor additional convenience, you can also use the loguru-config library to setup the logger directly from a configuration file.\n\nEntirely compatible with standard logging\nWish to use built-in logging Handler as a Loguru sink?\nhandler = logging.handlers.SysLogHandler(address=('localhost', 514))\nlogger.add(handler)\nNeed to propagate Loguru messages to standard logging?\nclass PropagateHandler(logging.Handler):\n    def emit(self, record: logging.LogRecord) -> None:\n        logging.getLogger(record.name).handle(record)\n\nlogger.add(PropagateHandler(), format=\"{message}\")\nWant to intercept standard logging messages toward your Loguru sinks?\nclass InterceptHandler(logging.Handler):\n    def emit(self, record: logging.LogRecord) -> None:\n        # Get corresponding Loguru level if it exists.\n        level: str | int\n        try:\n            level = logger.level(record.levelname).name\n        except ValueError:\n            level = record.levelno\n\n        # Find caller from where originated the logged message.\n        frame, depth = inspect.currentframe(), 0\n        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):\n            frame = frame.f_back\n            depth += 1\n\n        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())\n\nlogging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)\n\nPersonalizable defaults through environment variables\nDon't like the default logger formatting? Would prefer another DEBUG color? 
No problem:\n# Linux / OSX\nexport LOGURU_FORMAT=\"{time} | <lvl>{message}</lvl>\"\n\n# Windows\nsetx LOGURU_DEBUG_COLOR \"<green>\"\n\nConvenient parser\nIt is often useful to extract specific information from generated logs, this is why Loguru provides a parse() method which helps to deal with logs and regexes.\npattern = r\"(?P<time>.*) - (?P<level>[0-9]+) - (?P<message>.*)\"  # Regex with named groups\ncaster_dict = dict(time=dateutil.parser.parse, level=int)        # Transform matching groups\n\nfor groups in logger.parse(\"file.log\", pattern, cast=caster_dict):\n    print(\"Parsed:\", groups)\n    # {\"level\": 30, \"message\": \"Log example\", \"time\": datetime(2018, 12, 09, 11, 23, 55)}\n\nExhaustive notifier\nLoguru can easily be combined with the great notifiers library (must be installed separately) to receive an e-mail when your program fail unexpectedly or to send many other kind of notifications.\nimport notifiers\n\nparams = {\n    \"username\": \"you@gmail.com\",\n    \"password\": \"abc123\",\n    \"to\": \"dest@gmail.com\"\n}\n\n# Send a single notification\nnotifier = notifiers.get_notifier(\"gmail\")\nnotifier.notify(message=\"The application is running!\", **params)\n\n# Be alerted on each error message\nfrom notifiers.logging import NotificationHandler\n\nhandler = NotificationHandler(\"gmail\", defaults=params)\nlogger.add(handler, level=\"ERROR\")\n\n\n10x faster than built-in logging\n\nAlthough logging impact on performances is in most cases negligible, a zero-cost logger would allow to use it anywhere without much concern. In an upcoming release, Loguru's critical functions will be implemented in C for maximum speed.\n\nDocumentation\n\nAPI Reference\nHelp & Guides\nType hints\nContributing\nLicense\nChangelog\n\n\n\n", "description": "Python logging made (stupidly) simple."}, {"name": "llvmlite", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nllvmlite\nA Lightweight LLVM Python Binding for Writing JIT Compilers\nWhy llvmlite\nKey Benefits\nCompatibility\nDocumentation\nPre-built binaries\nOther build methods\n\n\n\n\n\nREADME.rst\n\n\n\n\nllvmlite\n\n\n\n\n\n\n\nA Lightweight LLVM Python Binding for Writing JIT Compilers\nllvmlite is a project originally tailored for Numba's needs, using the\nfollowing approach:\n\nA small C wrapper around the parts of the LLVM C++ API we need that are\nnot already exposed by the LLVM C API.\nA ctypes Python wrapper around the C API.\nA pure Python implementation of the subset of the LLVM IR builder that we\nneed for Numba.\n\n\nWhy llvmlite\nThe old llvmpy  binding exposes a lot of LLVM APIs but the mapping of\nC++-style memory management to Python is error prone. Numba and many JIT\ncompilers do not need a full LLVM API.  
Only the IR builder, optimizer,\nand JIT compiler APIs are necessary.\n\nKey Benefits\n\nThe IR builder is pure Python code and decoupled from LLVM's\nfrequently-changing C++ APIs.\nMaterializing a LLVM module calls LLVM's IR parser which provides\nbetter error messages than step-by-step IR building through the C++\nAPI (no more segfaults or process aborts).\nMost of llvmlite uses the LLVM C API which is small but very stable\n(low maintenance when changing LLVM version).\nThe binding is not a Python C-extension, but a plain DLL accessed using\nctypes (no need to wrestle with Python's compiler requirements and C++ 11\ncompatibility).\nThe Python binding layer has sane memory management.\nllvmlite is quite faster than llvmpy's thanks to a much simpler architeture\n(the Numba test suite is twice faster than it was).\n\n\nCompatibility\nllvmlite works with Python 3.8 and greater. We attempt to test with the latest\nPython version, this can be checked by looking at the public CI builds.\nAs of version 0.41.0, llvmlite requires LLVM 14.x.x on all architectures\nHistorical compatibility table:\n\n\nllvmlite versions\ncompatible LLVM versions\n\n\n\n0.41.0 - ...\n14.x.x\n\n0.40.0 - 0.40.1\n11.x.x and 14.x.x (12.x.x and 13.x.x untested but may work)\n\n0.37.0 - 0.39.1\n11.x.x\n\n0.34.0 - 0.36.0\n10.0.x (9.0.x for  aarch64 only)\n\n0.33.0\n9.0.x\n\n0.29.0 - 0.32.0\n7.0.x, 7.1.x, 8.0.x\n\n0.27.0 - 0.28.0\n7.0.x\n\n0.23.0 - 0.26.0\n6.0.x\n\n0.21.0 - 0.22.0\n5.0.x\n\n0.17.0 - 0.20.0\n4.0.x\n\n0.16.0 - 0.17.0\n3.9.x\n\n0.13.0 - 0.15.0\n3.8.x\n\n0.9.0 - 0.12.1\n3.7.x\n\n0.6.0 - 0.8.0\n3.6.x\n\n0.1.0 - 0.5.1\n3.5.x\n\n\n\n\nDocumentation\nYou'll find the documentation at http://llvmlite.pydata.org\n\nPre-built binaries\nWe recommend you use the binaries provided by the Numba team for\nthe Conda package manager.  You can find them in Numba's anaconda.org\nchannel.  For example:\n$ conda install --channel=numba llvmlite\n\n(or, simply, the official llvmlite package provided in the Anaconda\ndistribution)\n\nOther build methods\nIf you don't want to use our pre-built packages, you can compile\nand install llvmlite yourself.  
The documentation will teach you how:\nhttp://llvmlite.pydata.org/en/latest/install/index.html\n\n\n", "description": "Lightweight LLVM python binding for writing JIT compilers."}, {"name": "librosa", "readme": "\n\nlibrosa\nA python package for music and audio analysis.\n\n\n\n\n\n\n\nTable of Contents\n\nDocumentation\nInstallation\n\nUsing PyPI\nUsing Anaconda\nBuilding From Source\nHints for Installation\n\nsoundfile\naudioread\n\nLinux (apt get)\nLinux (yum)\nMac\nWindows\n\n\n\n\n\n\nDiscussion\nCiting\n\n\nDocumentation\nSee https://librosa.org/doc/ for a complete reference manual and introductory tutorials.\nThe advanced example gallery should give you a quick sense of the kinds\nof things that librosa can do.\n\nBack To Top \u21a5\nInstallation\nUsing PyPI\nThe latest stable release is available on PyPI, and you can install it by saying\npython -m pip install librosa\n\nUsing Anaconda\nAnaconda users can install using conda-forge:\nconda install -c conda-forge librosa\n\nBuilding from source\nTo build librosa from source, say\npython setup.py build\n\nThen, to install librosa, say\npython setup.py install\n\nIf all went well, you should be able to execute the following commands from a python console:\nimport librosa\nlibrosa.show_versions()\n\nThis should print out a description of your software environment, along with the installed versions of other packages used by librosa.\n\ud83d\udcdd OS X users should follow the installation guide given below.\nAlternatively, you can download or clone the repository and use pip to handle dependencies:\nunzip librosa.zip\npython -m pip install -e librosa\n\nor\ngit clone https://github.com/librosa/librosa.git\npython -m pip install -e librosa\n\nBy calling pip list you should see librosa now as an installed package:\nlibrosa (0.x.x, /path/to/librosa)\n\n\nBack To Top \u21a5\nHints for the Installation\nlibrosa uses soundfile and audioread to load audio files.\n\ud83d\udcdd Note that older releases of soundfile (prior to 0.11) do not support MP3, which will cause librosa to fall back on the audioread library.\nsoundfile\nIf you're using conda to install librosa, then audio encoding dependencies will be handled automatically.\nIf you're using pip on a Linux environment, you may need to install libsndfile\nmanually.  
Please refer to the SoundFile installation documentation for details.\naudioread and MP3 support\nTo fuel audioread with more audio-decoding power (e.g., for reading MP3 files),\nyou may need to install either ffmpeg or GStreamer.\n\ud83d\udcddNote that on some platforms, audioread needs at least one of the programs to work properly.\nIf you are using Anaconda, install ffmpeg by calling\nconda install -c conda-forge ffmpeg\n\nIf you are not using Anaconda, here are some common commands for different operating systems:\n\n\nLinux (apt-get):\n\n\napt-get install ffmpeg\n\nor\napt-get install gstreamer1.0-plugins-base gstreamer1.0-plugins-ugly\n\n\n\nLinux (yum):\n\n\nyum install ffmpeg\n\nor\nyum install gstreamer1.0-plugins-base gstreamer1.0-plugins-ugly\n\n\n\nMac:\n\n\nbrew install ffmpeg\n\nor\nbrew install gstreamer\n\n\n\nWindows:\n\n\ndownload ffmpeg binaries from this website or gstreamer binaries from this website\nFor GStreamer, you also need to install the Python bindings with\npython -m pip install pygobject\n\n\nBack To Top \u21a5\nDiscussion\nPlease direct non-development questions and discussion topics to our web forum at\nhttps://groups.google.com/forum/#!forum/librosa\n\nBack To Top \u21a5\nCiting\nIf you want to cite librosa in a scholarly work, there are two ways to do it.\n\n\nIf you are using the library for your work, for the sake of reproducibility, please cite\nthe version you used as indexed at Zenodo:\n\n\n\nIf you wish to cite librosa for its design, motivation, etc., please cite the paper\npublished at SciPy 2015:\nMcFee, Brian, Colin Raffel, Dawen Liang, Daniel PW Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. \"librosa: Audio and music signal analysis in python.\" In Proceedings of the 14th python in science conference, pp. 18-25. 
2015.\n\n\n\nBack To Top \u21a5\n", "description": "Audio and music analysis library.", "category": "Audio"}, {"name": "korean-lunar-calendar", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nkorean_lunar_calendar\nOverview\nDocs\nInstall\nImport\nExample\nValidation\nOther languages\n\n\n\n\n\nREADME.md\n\n\n\n\nkorean_lunar_calendar\n\nLibrary to convert Korean lunar-calendar to Gregorian calendar.\n\nOverview\nKorean calendar and Chinese calendar is same lunar calendar but have different date.\nThis follow the KARI(Korea Astronomy and Space Science Institute)\n\ud55c\uad6d \uc591\uc74c\ub825 \ubcc0\ud658 (\ud55c\uad6d\ucc9c\ubb38\uc5f0\uad6c\uc6d0 \uae30\uc900) - \ub124\ud2b8\uc6cc\ud06c \uc5f0\uacb0 \ubd88\ud544\uc694\n\uc74c\ub825 \uc9c0\uc6d0 \ubc94\uc704 (1000\ub144 01\uc6d4 01\uc77c ~ 2050\ub144 11\uc6d4 18\uc77c)\nKorean Lunar Calendar (1000-01-01 ~ 2050-11-18)\n\n\uc591\ub825 \uc9c0\uc6d0 \ubc94\uc704 (1000\ub144 02\uc6d4 13\uc77c ~ 2050\ub144 12\uc6d4 31\uc77c)\nGregorian Calendar (1000-02-13 ~ 2050-12-31)\n\nExample Site\nDocs\n\nInstall\nImport\nExample\nValidation\nOther languages\n\nInstall\npip install korean_lunar_calendar\nImport\nfrom korean_lunar_calendar import KoreanLunarCalendar\nExample\nKorean Solar Date -> Korean Lunar Date (\uc591\ub825 -> \uc74c\ub825)\ncalendar = KoreanLunarCalendar()\n\n# params : year(\ub144), month(\uc6d4), day(\uc77c)\ncalendar.setSolarDate(2017, 6, 24)\n\n# Lunar Date (ISO Format)\nprint(calendar.LunarIsoFormat())\n\n# Korean GapJa String\nprint(calendar.getGapJaString())\n\n# Chinese GapJa String\nprint(calendar.getChineseGapJaString())\n[Result]\n2017-05-01 Intercalation\n\uc815\uc720\ub144 \ubcd1\uc624\uc6d4 \uc784\uc624\uc77c (\uc724\uc6d4)\n\u4e01\u9149\u5e74 \u4e19\u5348\u6708 \u58ec\u5348\u65e5 (\u958f\u6708)\n\nKorean Lunar Date -> Korean Solar Date (\uc74c\ub825 -> \uc591\ub825)\ncalendar = KoreanLunarCalendar()\n\n# params : year(\ub144), month(\uc6d4), day(\uc77c), intercalation(\uc724\ub2ec\uc5ec\ubd80)\ncalendar.setLunarDate(1956, 1, 21, False)\n\n# Solar Date (ISO Format)\nprint(calendar.SolarIsoFormat())\n\n# Korean GapJa String\nprint(calendar.getGapJaString())\n\n# Chinese GapJa String\nprint(calendar.getChineseGapJaString())\n[Result]\n1956-03-03\n\ubcd1\uc2e0\ub144 \uacbd\uc778\uc6d4 \uae30\uc0ac\uc77c\n\u4e19\u7533\u5e74 \u5e9a\u5bc5\u6708 \u5df1\u5df3\u65e5\n\nValidation\nCheck for invalid date input\ncalendar = KoreanLunarCalendar()\n\n# invald date\ncalendar.setLunarDate(99, 1, 1, False) # => return False\ncalendar.setSolarDate(2051, 1, 1) # => return False\n\n# OK\ncalendar.setLunarDate(1000, 1, 1, False) # => return True\ncalendar.setSolarDate(2050, 12, 31) # => return True\nOther languages\n\nJava : https://github.com/usingsky/KoreanLunarCalendar\nPython : https://github.com/usingsky/korean_lunar_calendar_py\nJavascript : https://github.com/usingsky/korean_lunar_calendar_js\n\n\n\n"}, {"name": "kiwisolver", "readme": "\n\n\n\n\n\nKiwi is an efficient C++ implementation of the Cassowary constraint solving\nalgorithm. Kiwi is an implementation of the algorithm based on the\nseminal Cassowary paper.\nIt is not a refactoring of the original C++ solver. Kiwi has been designed\nfrom the ground up to be lightweight and fast. Kiwi ranges from 10x to 500x\nfaster than the original Cassowary solver with typical use cases gaining a 40x\nimprovement. 
Memory savings are consistently > 5x.\nIn addition to the C++ solver, Kiwi ships with hand-rolled Python bindings for\nPython 3.7+.\n", "description": "Implementation of Cassowary constraint solver algorithm."}, {"name": "kerykeion", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nKerykeion\nWeb API\nDonate\nInstallation\nUsage\nGenerate a SVG Chart:\nReport\nOther exeples of possibles usecase\nDocumentation\nDevelopment\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\nKerykeion\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\u00a0\nKerykeion is a python library for Astrology.\nIt can calculate all the planet and house position,\nalso it can calculate the aspects of a single persone or between two, you can set how many planets you\nneed in the settings in the utility module.\nIt also can generate an SVG of a birthchart, a synastry chart or a transit chart.\nHere's an example of a birthchart:\n\nWeb API\nIf you want to use Kerykeion in a web application, I've created a web API for this purpose, you can find it here:\nAstrologerAPI\nIt's open source, it's a way to support me and the project.\nDonate\nMaintaining this project is a lot of work, the Astrologer API doesn't nearly cover the costs of a software engineer working on this project full time. I do this because I love it, but until I can make this my full time job, I won't be able to spend as much time on it.\nIf you want to support me, you can do it here:\n\nInstallation\nKerykeion is a Python 3.9 package, make sure you have Python 3.9 or above installed on your system.\npip3 install kerykeion\nUsage\nHere some examples:\n# Import the main class for creating a kerykeion instance:\nfrom kerykeion import AstrologicalSubject\n\n# Create a kerykeion instance:\n# Args: Name, year, month, day, hour, minuts, city, nation(optional)\nkanye = AstrologicalSubject(\"Kanye\", 1977, 6, 8, 8, 45, \"Atlanta\")\n\n# Get the information about the sun in the chart:\n# (The position of the planets always starts at 0)\nkanye.sun\n\n#> {'name': 'Sun', 'quality': 'Mutable', 'element': 'Air', 'sign': 'Gem', 'sign_num': 2, 'pos': 17.598992059774275, 'abs_pos': 77.59899205977428, 'emoji': '\u264a\ufe0f', 'house': '12th House', 'retrograde': False}\n\n# Get information about the first house:\nkanye.first_house\n\n#> {'name': 'First_House', 'quality': 'Cardinal', 'element': 'Water', 'sign': 'Can', 'sign_num': 3, 'pos': 17.995779673209114, 'abs_pos': 107.99577967320911, 'emoji': '\u264b\ufe0f'}\n\n# Get element of the moon sign:\nkanye.moon.element\n\n#> 'Water'\nTo avoid connecting to GeoNames (eg. avoiding hourly limit or no internet connection) you should instance kerykeion like this:\nkanye = AstrologicalSubject(\n    \"Kanye\", 1977, 6, 8, 8, 45, lng=50, lat=50, tz_str=\"Europe/Rome\", city=\"Rome\"\n)\nThe difference is that you have to pass the longitude, latitude and the timezone string, instead of the city and nation.\nIf you omit the nation, it will be set to \"GB\" by default, but the value is not used for calculations. 
It's better to set it to the correct value though.\nGenerate a SVG Chart:\nfrom kerykeion import AstrologicalSubject, KerykeionChartSVG\n\nfirst = AstrologicalSubject(\"Jack\", 1990, 6, 15, 15, 15, \"Roma\")\nsecond = AstrologicalSubject(\"Jane\", 1991, 10, 25, 21, 00, \"Roma\")\n\n# Set the type, it can be Natal, Synastry or Transit\n\nname = KerykeionChartSVG(first, chart_type=\"Synastry\", second_obj=second)\nname.makeSVG()\nprint(len(name.aspects_list))\n\n#> Generating kerykeion object for Jack...\n#> Generating kerykeion object for Jane...\n#> Jack birth location: Roma, 41.89193, 12.51133\n#> SVG Generated Correctly\n#> 38\n\nReport\nTo print a report of all the data:\nfrom kerykeion import Report, AstrologicalSubject\n\nkanye = AstrologicalSubject(\"Kanye\", 1977, 6, 8, 8, 45, \"Atlanta\")\nreport = Report(kanye)\nreport.print_report()\nReturns:\n+- Kerykeion report for Kanye -+\n+----------+------+-------------+-----------+----------+\n| Date     | Time | Location    | Longitude | Latitude |\n+----------+------+-------------+-----------+----------+\n| 8/6/1977 | 8:45 | Atlanta, US | -84.38798 | 33.749   |\n+----------+------+-------------+-----------+----------+\n+-----------+------+-------+------+----------------+\n| Planet    | Sign | Pos.  | Ret. | House          |\n+-----------+------+-------+------+----------------+\n| Sun       | Gem  | 17.6  | -    | Twelfth_House  |\n| Moon      | Pis  | 16.43 | -    | Ninth_House    |\n| Mercury   | Tau  | 26.29 | -    | Eleventh_House |\n| Venus     | Tau  | 2.03  | -    | Tenth_House    |\n| Mars      | Tau  | 1.79  | -    | Tenth_House    |\n| Jupiter   | Gem  | 14.61 | -    | Eleventh_House |\n| Saturn    | Leo  | 12.8  | -    | Second_House   |\n| Uranus    | Sco  | 8.27  | R    | Fourth_House   |\n| Neptune   | Sag  | 14.69 | R    | Fifth_House    |\n| Pluto     | Lib  | 11.45 | R    | Fourth_House   |\n| Mean_Node | Lib  | 21.49 | R    | Fourth_House   |\n| True_Node | Lib  | 22.82 | R    | Fourth_House   |\n| Chiron    | Tau  | 4.17  | -    | Tenth_House    |\n+-----------+------+-------+------+----------------+\n+----------------+------+----------+\n| House          | Sign | Position |\n+----------------+------+----------+\n| First_House    | Can  | 18.0     |\n| Second_House   | Leo  | 9.51     |\n| Third_House    | Vir  | 4.02     |\n| Fourth_House   | Lib  | 3.98     |\n| Fifth_House    | Sco  | 9.39     |\n| Sixth_House    | Sag  | 15.68    |\n| Seventh_House  | Cap  | 18.0     |\n| Eighth_House   | Aqu  | 9.51     |\n| Ninth_House    | Pis  | 4.02     |\n| Tenth_House    | Ari  | 3.98     |\n| Eleventh_House | Tau  | 9.39     |\n| Twelfth_House  | Gem  | 15.68    |\n+----------------+------+----------+\n\n\nAnd if you want to export it to a file:\n$ python3 your_script_name.py > file.txt\nOther exeples of possibles usecase\n# Get all aspects between two persons:\n\nfrom kerykeion import SynastryAspects, AstrologicalSubject\nfirst = AstrologicalSubject(\"Jack\", 1990, 6, 15, 15, 15, \"Roma\")\nsecond = AstrologicalSubject(\"Jane\", 1991, 10, 25, 21, 00, \"Roma\")\n\nname = SynastryAspects(first, second)\naspect_list = name.get_relevant_aspects()\nprint(aspect_list[0])\n\n#> Generating kerykeion object for Jack...\n#> Generating kerykeion object for Jane...\n#> {'p1_name': 'Sun', 'p1_abs_pos': 84.17867971515636, 'p2_name': 'Sun', 'p2_abs_pos': 211.90472999502984, 'aspect': 'trine', 'orbit': 7.726050279873476, 'aspect_degrees': 120, 'color': '#36d100', 'aid': 6, 'diff': 127.72605027987348, 'p1': 0, 'p2': 0}\nDocumentation\nMost 
of the functions and the classes are self documented by the types and have docstrings.\nAn auto-generated documentation is available here.\nSooner or later I'll try to write an extensive documentation.\nDevelopment\nYou can clone this repository or download a zip file using the right side buttons.\nContributing\nFeel free to contribute to the code!\n\n\n", "description": "Python library for astrology and generating astrology charts.", "category": "Astrology"}, {"name": "keras", "readme": "\nKeras is a deep learning API written in Python,\nrunning on top of the machine learning platform TensorFlow.\nIt was developed with a focus on enabling fast experimentation and\nproviding a delightful developer experience.\nThe purpose of Keras is to give an unfair advantage to any developer\nlooking to ship ML-powered apps.\nKeras is:\n\nSimple \u2013 but not simplistic. Keras reduces developer cognitive load\nto free you to focus on the parts of the problem that really matter.\nKeras focuses on ease of use, debugging speed, code elegance & conciseness,\nmaintainability, and deployability (via TFServing, TFLite, TF.js).\nFlexible \u2013 Keras adopts the principle of progressive disclosure of\ncomplexity: simple workflows should be quick and easy, while arbitrarily\nadvanced workflows should be possible via a clear path that builds upon\nwhat you\u2019ve already learned.\nPowerful \u2013 Keras provides industry-strength performance and\nscalability: it is used by organizations and companies including NASA,\nYouTube, and Waymo. That\u2019s right \u2013 your YouTube recommendations are\npowered by Keras, and so is the world\u2019s most advanced driverless vehicle.\n\n", "description": "Deep learning API for TensorFlow and other backends."}, {"name": "jupyterlab", "readme": "\nInstallation |\nDocumentation |\nContributing |\nLicense |\nTeam |\nGetting help |\nJupyterLab\n\n\n\n\n\n\n\n\n\n\n\nAn extensible environment for interactive and reproducible computing, based on the\nJupyter Notebook and Architecture.\nJupyterLab is the next-generation user interface for Project Jupyter offering\nall the familiar building blocks of the classic Jupyter Notebook (notebook,\nterminal, text editor, file browser, rich outputs, etc.) in a flexible and\npowerful user interface.\nJupyterLab can be extended using npm packages\nthat use our public APIs. The prebuilt extensions can be distributed\nvia PyPI,\nconda, and other package managers. The source extensions can be installed\ndirectly from npm (search for jupyterlab-extension) but require an additional build step.\nYou can also find JupyterLab extensions exploring GitHub topic jupyterlab-extension.\nTo learn more about extensions, see the user documentation.\nRead the current JupyterLab documentation on ReadTheDocs.\n\nGetting started\nInstallation\nIf you use conda, mamba, or pip, you can install JupyterLab with one of the following commands.\n\nIf you use conda:\nconda install -c conda-forge jupyterlab\n\n\nIf you use mamba:\nmamba install -c conda-forge jupyterlab\n\n\nIf you use pip:\npip install jupyterlab\n\nIf installing using pip install --user, you must add the user-level bin directory to your PATH environment variable in order to launch jupyter lab. If you are using a Unix derivative (e.g., FreeBSD, GNU/Linux, macOS), you can do this by running export PATH=\"$HOME/.local/bin:$PATH\". If you are using a macOS version that comes with Python 2, run pip3 instead of pip.\n\nFor more detailed instructions, consult the installation guide. 
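As a quick sanity check after installing (a hedged sketch, not part of the official instructions), you can confirm the package is importable and see which version you got:

import jupyterlab
print(jupyterlab.__version__)   # assumed to match the output of `jupyter lab --version`
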
Project installation instructions from the git sources are available in the contributor documentation.\nInstalling with Previous Versions of Jupyter Notebook\nWhen using a version of Jupyter Notebook earlier than 5.3, the following command must be run after installing JupyterLab to enable the JupyterLab server extension:\njupyter serverextension enable --py jupyterlab --sys-prefix\n\nRunning\nStart up JupyterLab using:\njupyter lab\n\nJupyterLab will open automatically in the browser. See the documentation for additional details.\nIf you encounter an error like \"Command 'jupyter' not found\", please make sure PATH environment variable is set correctly. Alternatively, you can start up JupyterLab using ~/.local/bin/jupyter lab without changing the PATH environment variable.\nPrerequisites and Supported Browsers\nThe latest versions of the following browsers are currently known to work:\n\nFirefox\nChrome\nSafari\n\nSee our documentation for additional details.\n\nGetting help\nWe encourage you to ask questions on the Discourse forum. A question answered there can become a useful resource for others.\nBug report\nTo report a bug please read the guidelines and then open a Github issue. To keep resolved issues self-contained, the lock bot will lock closed issues as resolved after a period of inactivity. If a related discussion is still needed after an issue is locked, please open a new issue and reference the old issue.\nFeature request\nWe also welcome suggestions for new features as they help make the project more useful for everyone. To request a feature please use the feature request template.\n\nDevelopment\nExtending JupyterLab\nTo start developing an extension for JupyterLab, see the developer documentation and the API docs.\nContributing\nTo contribute code or documentation to JupyterLab itself, please read the contributor documentation.\nJupyterLab follows the Jupyter Community Guides.\nLicense\nJupyterLab uses a shared copyright model that enables all contributors to maintain the\ncopyright on their contributions. All code is licensed under the terms of the revised BSD license.\nTeam\nJupyterLab is part of Project Jupyter and is developed by an open community. The maintenance team is assisted by a much larger group of contributors to JupyterLab and Project Jupyter as a whole.\nJupyterLab's current maintainers are listed in alphabetical order, with affiliation, and main areas of contribution:\n\nMehmet Bektas, Netflix (general development, extensions).\nAlex Bozarth, IBM (general development, extensions).\nEric Charles, Datalayer, (general development, extensions).\nFr\u00e9d\u00e9ric Collonval, QuantStack (general development, extensions).\nMartha Cryan, IBM (general development, extensions).\nAfshin Darian, QuantStack (co-creator, application/high-level architecture,\nprolific contributions throughout the code base).\nVidar T. 
Fauske, JPMorgan Chase (general development, extensions).\nBrian Granger, AWS (co-creator, strategy, vision, management, UI/UX design,\narchitecture).\nJason Grout, Databricks (co-creator, vision, general development).\nMicha\u0142 Krassowski, University of Oxford (general development, extensions).\nMax Klein, JPMorgan Chase (UI Package, build system, general development, extensions).\nGonzalo Pe\u00f1a-Castellanos, QuanSight (general development, i18n, extensions).\nFernando Perez, UC Berkeley (co-creator, vision).\nIsabela Presedo-Floyd, QuanSight Labs (design/UX).\nSteven Silvester, MongoDB (co-creator, release management, packaging,\nprolific contributions throughout the code base).\nJeremy Tuloup, QuantStack (general development, extensions).\n\nMaintainer emeritus:\n\nChris Colbert, Project Jupyter (co-creator, application/low-level architecture,\ntechnical leadership, vision, PhosphorJS)\nJessica Forde, Project Jupyter (demo, documentation)\nTim George, Cal Poly (UI/UX design, strategy, management, user needs analysis).\nCameron Oelsen, Cal Poly (UI/UX design).\nIan Rose, Quansight/City of LA (general core development, extensions).\nAndrew Schlaepfer, Bloomberg (general development, extensions).\nSaul Shanabrook, Quansight (general development, extensions)\n\nThis list is provided to give the reader context on who we are and how our team functions.\nTo be listed, please submit a pull request with your information.\n\nWeekly Dev Meeting\nWe have videoconference meetings every week where we discuss what we have been working on and get feedback from one another.\nAnyone is welcome to attend, if they would like to discuss a topic or just listen in.\n\nWhen: Wednesdays 9:00 AM Pacific Time (USA)\nWhere: jovyan Zoom\nWhat: Meeting notes\n\n\nNotes are archived on GitHub JupyterLab team compass.\n\n", "description": "pygments - JupyterLab syntax highlighting theme for Pygments."}, {"name": "jupyterlab-server", "readme": "\n\n\n\n\n\n\n\n\n\n\n\njupyterlab server\nMotivation\nInstall\nUsage\nExtending the Application\nContribution\n\n\n\n\n\nREADME.md\n\n\n\n\njupyterlab server\n\n\nMotivation\nJupyterLab Server sits between JupyterLab and Jupyter Server, and provides a\nset of REST API handlers and utilities that are used by JupyterLab. 
It is a separate project in order to\naccommodate creating JupyterLab-like applications from a more limited scope.\nInstall\npip install jupyterlab_server\nTo include optional openapi dependencies, use:\npip install jupyterlab_server[openapi]\nTo include optional pytest_plugin dependencies, use:\npip install jupyterlab_server[test]\nUsage\nSee the full documentation for API docs and REST endpoint descriptions.\nExtending the Application\nSubclass the LabServerApp and provide additional traits and handlers as appropriate for your application.\nContribution\nPlease see CONTRIBUTING.md for details.\n\n\n"}, {"name": "jupyterlab-pygments", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nJupyterLab Pygments Theme\nScreencast\nInstallation\nDependencies\nLimitations\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyterLab Pygments Theme\nThis package contains a syntax coloring theme for pygments making use of\nthe JupyterLab CSS variables.\nThe goal is to enable the use of JupyterLab's themes with pygments-generated HTML.\nScreencast\nIn the following screencast, we demonstrate how Pygments-highlighted code can make use of the JupyterLab theme.\n\nInstallation\njupyterlab_pygments can be installed with the conda package manager\nconda install -c conda-forge jupyterlab_pygments\n\nor from pypi\npip install jupyterlab_pygments\n\nDependencies\n\njupyterlab_pygments requires pygments version 2.4.1.\nThe CSS variables used by the theme correspond to the CodeMirror syntex coloring\ntheme defined in the NPM package @jupyterlab/codemirror. Supported versions for @jupyterlab/codemirror's CSS include 0.19.1, ^1.0, and, ^2.0.\n\nLimitations\nPygments-generated HTML and CSS classes are not granular enough to reproduce\nall of the details of codemirror (the JavaScript text editor used by JupyterLab).\nThis includes the ability to differentiate properties from general names.\nLicense\njupyterlab_pygments uses a shared copyright model that enables all contributors to maintain the\ncopyright on their contributions. All code is licensed under the terms of the revised BSD license.\n\n\n"}, {"name": "jupyter-server", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJupyter Server\nInstallation and Basic usage\nVersioning and Branches\nUsage - Running Jupyter Server\nRunning in a local installation\nTesting\nContributing\nTeam Meetings and Roadmap\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyter Server\n\n\nThe Jupyter Server provides the backend (i.e. the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.\nFor more information, read our documentation here.\nInstallation and Basic usage\nTo install the latest release locally, make sure you have\npip installed and run:\npip install jupyter_server\n\nJupyter Server currently supports Python>=3.6 on Linux, OSX and Windows.\nVersioning and Branches\nIf Jupyter Server is a dependency of your project/application, it is important that you pin it to a version that works for your application. Currently, Jupyter Server only has minor and patch versions. Different minor versions likely include API-changes while patch versions do not change API.\nWhen a new minor version is released on PyPI, a branch for that version will be created in this repository, and the version of the main branch will be bumped to the next minor version number. 
That way, the main branch always reflects the latest un-released version.\nTo see the changes between releases, checkout the CHANGELOG.\nUsage - Running Jupyter Server\nRunning in a local installation\nLaunch with:\njupyter server\n\nTesting\nSee CONTRIBUTING.\nContributing\nIf you are interested in contributing to the project, see CONTRIBUTING.rst.\nTeam Meetings and Roadmap\n\nWhen: Thursdays 8:00am, Pacific time\nWhere: Jovyan Zoom\nWhat: Meeting notes\n\nSee our tentative roadmap here.\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team. If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n"}, {"name": "jupyter-core", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nJupyter Core\nDevelopment Setup\nCoding\nCode Styling\nDocumentation\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyter Core\n\n\nCore common functionality of Jupyter projects.\nThis package contains base application classes and configuration inherited by other projects.\nIt doesn't do much on its own.\nDevelopment Setup\nThe Jupyter Contributor Guides provide extensive information on contributing code or documentation to Jupyter projects. The limited instructions below for setting up a development environment are for your convenience.\nCoding\nYou'll need Python and pip on the search path. Clone the Jupyter Core git repository to your computer, for example in /my/projects/jupyter_core.\nNow create an editable install\nand download the dependencies of code and test suite by executing:\ncd /my/projects/jupyter_core/\npip install -e \".[test]\"\npy.test\n\nThe last command runs the test suite to verify the setup. 
During development, you can pass filenames to py.test, and it will execute only those tests.\nCode Styling\njupyter_core has adopted automatic code formatting so you shouldn't\nneed to worry too much about your code style.\nAs long as your code is valid,\nthe pre-commit hook should take care of how it should look.\npre-commit and its associated hooks will automatically be installed when\nyou run pip install -e \".[test]\"\nTo install pre-commit manually, run the following:\n    pip install pre-commit\n    pre-commit install\nYou can invoke the pre-commit hook by hand at any time with:\n    pre-commit run\nwhich should run any autoformatting on your code\nand tell you about any errors it couldn't fix automatically.\nYou may also install black integration\ninto your text editor to format code automatically.\nIf you have already committed files before setting up the pre-commit\nhook with pre-commit install, you can fix everything up using\npre-commit run --all-files. You need to make the fixing commit\nyourself after that.\nDocumentation\nThe documentation of Jupyter Core is generated from the files in docs/ using Sphinx. Instructions for setting up Sphinx with a selection of optional modules are in the Documentation Guide. You'll also need the make command.\nFor a minimal Sphinx installation to process the Jupyter Core docs, execute:\npip install sphinx\n\nThe following commands build the documentation in HTML format and check for broken links:\ncd /my/projects/jupyter_core/docs/\nmake html linkcheck\n\nPoint your browser to the following URL to access the generated documentation:\nfile:///my/projects/jupyter_core/docs/_build/html/index.html\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter\nproject. This includes all of the Jupyter subprojects. A full list with\ndetails is kept in the documentation directory, in the file\nabout/credits.txt.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/ipython/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. It is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team. 
If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n"}, {"name": "jupyter-client", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nJupyter Client\nDevelopment Setup\nCoding\nDocumentation\nContributing\nAbout the Jupyter Development Team\nOur Copyright Policy\n\n\n\n\n\nREADME.md\n\n\n\n\nJupyter Client\n\n\njupyter_client contains the reference implementation of the Jupyter protocol.\nIt also provides client and kernel management APIs for working with kernels.\nIt also provides the jupyter kernelspec entrypoint\nfor installing kernelspecs for use with Jupyter frontends.\nDevelopment Setup\nThe Jupyter Contributor Guides provide extensive information on contributing code or documentation to Jupyter projects. The limited instructions below for setting up a development environment are for your convenience.\nCoding\nYou'll need Python and pip on the search path. Clone the Jupyter Client git repository to your computer, for example in /my/project/jupyter_client\ncd /my/projects/\ngit clone git@github.com:jupyter/jupyter_client.git\nNow create an editable install\nand download the dependencies of code and test suite by executing:\ncd /my/projects/jupyter_client/\npip install -e \".[test]\"\npytest\nThe last command runs the test suite to verify the setup. During development, you can pass filenames to pytest, and it will execute only those tests.\nDocumentation\nThe documentation of Jupyter Client is generated from the files in docs/ using Sphinx. Instructions for setting up Sphinx with a selection of optional modules are in the Documentation Guide. You'll also need the make command.\nFor a minimal Sphinx installation to process the Jupyter Client docs, execute:\npip install \".[doc]\"\nThe following commands build the documentation in HTML format and check for broken links:\ncd /my/projects/jupyter_client/docs/\nmake html linkcheck\nPoint your browser to the following URL to access the generated documentation:\nfile:///my/projects/jupyter_client/docs/_build/html/index.html\nContributing\njupyter-client has adopted automatic code formatting so you shouldn't\nneed to worry too much about your code style.\nAs long as your code is valid,\nthe pre-commit hook should take care of how it should look.\nYou can invoke the pre-commit hook by hand at any time with:\npre-commit run\nwhich should run any autoformatting on your code\nand tell you about any errors it couldn't fix automatically.\nYou may also install black integration\ninto your text editor to format code automatically.\nIf you have already committed files before setting up the pre-commit\nhook with pre-commit install, you can fix everything up using\npre-commit run --all-files. 
You need to make the fixing commit\nyourself after that.\nSome of the hooks only run on CI by default, but you can invoke them by\nrunning with the --hook-stage manual argument.\nAbout the Jupyter Development Team\nThe Jupyter Development Team is the set of all contributors to the Jupyter project.\nThis includes all of the Jupyter subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/jupyter/.\nOur Copyright Policy\nJupyter uses a shared copyright model. Each contributor maintains copyright\nover their contributions to Jupyter. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the Jupyter\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire Jupyter\nDevelopment Team. If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the Jupyter repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n\n\n"}, {"name": "jsonschema", "readme": "\n     \njsonschema is an implementation of the JSON Schema specification for Python.\n>>> from jsonschema import validate\n\n>>> # A sample schema, like what we'd get from json.load()\n>>> schema = {\n...     \"type\" : \"object\",\n...     \"properties\" : {\n...         \"price\" : {\"type\" : \"number\"},\n...         \"name\" : {\"type\" : \"string\"},\n...     },\n... }\n\n>>> # If no exception is raised by validate(), the instance is valid.\n>>> validate(instance={\"name\" : \"Eggs\", \"price\" : 34.99}, schema=schema)\n\n>>> validate(\n...     instance={\"name\" : \"Eggs\", \"price\" : \"Invalid\"}, schema=schema,\n... )                                   # doctest: +IGNORE_EXCEPTION_DETAIL\nTraceback (most recent call last):\n    ...\nValidationError: 'Invalid' is not of type 'number'\nIt can also be used from the command line by installing check-jsonschema.\n\nFeatures\n\nFull support for Draft 2020-12, Draft 2019-09, Draft 7, Draft 6, Draft 4 and Draft 3\nLazy validation that can iteratively report all validation errors.\nProgrammatic querying of which properties or items failed validation.\n\n\n\nInstallation\njsonschema is available on PyPI. 
You can install using pip:\n$ pip install jsonschema\n\nExtras\nTwo extras are available when installing the package, both currently related to format validation:\n\n\nformat\nformat-nongpl\n\n\nThey can be used when installing in order to include additional dependencies, e.g.:\n$ pip install jsonschema'[format]'\nBe aware that the mere presence of these dependencies \u2013 or even the specification of format checks in a schema \u2013 do not activate format checks (as per the specification).\nPlease read the format validation documentation for further details.\n\n\n\nAbout\nI\u2019m Julian Berman.\njsonschema is on GitHub.\nGet in touch, via GitHub or otherwise, if you\u2019ve got something to contribute, it\u2019d be most welcome!\nYou can also generally find me on Libera (nick: Julian) in various channels, including #python.\nIf you feel overwhelmingly grateful, you can also sponsor me.\nAnd for companies who appreciate jsonschema and its continued support and growth, jsonschema is also now supportable via TideLift.\n\n\nRelease Information\nv4.19.0\n\nImporting the Validator protocol directly from the package root is deprecated.\nImport it from jsonschema.protocols.Validator instead.\nAutomatic retrieval of remote references (which is still deprecated) now properly succeeds even if the retrieved resource does not declare which version of JSON Schema it uses.\nSuch resources are assumed to be 2020-12 schemas.\nThis more closely matches the pre-referencing library behavior.\n\n\n", "description": "specifications - JSON files for JSON Schema metaschemas and vocabularies."}, {"name": "jsonschema-specifications", "readme": "\n   \nJSON support files from the JSON Schema Specifications (metaschemas, vocabularies, etc.), packaged for runtime access from Python as a referencing-based Schema Registry.\n"}, {"name": "jsonpickle", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\njsonpickle\nWhy jsonpickle?\nSecurity\nInstall\nNumpy Support\nPandas Support\njsonpickleJS\nLicense\nDevelopment\nGPG Signing\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\njsonpickle\njsonpickle is a library for the two-way conversion of complex Python objects\nand JSON.  jsonpickle builds upon the existing JSON\nencoders, such as simplejson, json, and ujson.\n\nWarning\njsonpickle can execute arbitrary Python code.\nPlease see the Security section for more details.\n\nFor complete documentation, please visit the\njsonpickle documentation.\nBug reports and merge requests are encouraged at the\njsonpickle repository on github.\n\nWhy jsonpickle?\nData serialized with python's pickle (or cPickle or dill) is not easily readable outside of python. Using the json format, jsonpickle allows simple data types to be stored in a human-readable format, and more complex data types such as numpy arrays and pandas dataframes, to be machine-readable on any platform that supports json. E.g., unlike pickled data, jsonpickled data stored in an Amazon S3 bucket is indexible by Amazon's Athena.\n\nSecurity\njsonpickle should be treated the same as the\nPython stdlib pickle module\nfrom a security perspective.\n\nWarning\nThe jsonpickle module is not secure.  Only unpickle data you trust.\nIt is possible to construct malicious pickle data which will execute\narbitrary code during unpickling.  
Never unpickle data that could have come\nfrom an untrusted source, or that could have been tampered with.\nConsider signing data with an HMAC if you need to ensure that it has not\nbeen tampered with.\nSafer deserialization approaches, such as reading JSON directly,\nmay be more appropriate if you are processing untrusted data.\n\n\nInstall\nInstall from pip for the latest stable release:\npip install jsonpickle\n\nInstall from github for the latest changes:\npip install git+https://github.com/jsonpickle/jsonpickle.git\n\nIf you have the files checked out for development:\ngit clone https://github.com/jsonpickle/jsonpickle.git\ncd jsonpickle\npython setup.py develop\n\n\nNumpy Support\njsonpickle includes a built-in numpy extension.  If would like to encode\nsklearn models, numpy arrays, and other numpy-based data then you must\nenable the numpy extension by registering its handlers:\n>>> import jsonpickle.ext.numpy as jsonpickle_numpy\n>>> jsonpickle_numpy.register_handlers()\n\n\nPandas Support\njsonpickle includes a built-in pandas extension.  If would like to encode\npandas DataFrame or Series objects then you must enable the pandas extension\nby registering its handlers:\n>>> import jsonpickle.ext.pandas as jsonpickle_pandas\n>>> jsonpickle_pandas.register_handlers()\n\n\njsonpickleJS\njsonpickleJS\nis a javascript implementation of jsonpickle by Michael Scott Cuthbert.\njsonpickleJS can be extremely useful for projects that have parallel data\nstructures between Python and Javascript.\n\nLicense\nLicensed under the BSD License. See COPYING for details.\nSee jsonpickleJS/LICENSE for details about the jsonpickleJS license.\n\nDevelopment\nUse make to run the unit tests:\nmake test\n\npytest is used to run unit tests internally.\nA tox target is provided to run tests using tox.\nSetting multi=1 tests using all installed and supported Python versions:\nmake tox\nmake tox multi=1\n\njsonpickle itself has no dependencies beyond the Python stdlib.\ntox is required for testing when using the tox test runner only.\nThe testing requirements are specified in requirements-dev.txt.\nIt is recommended to create a virtualenv and run tests from within the\nvirtualenv, or use a tool such as vx\nto activate the virtualenv without polluting the shell environment:\npython3 -mvenv env3x\nvx env3x pip install --requirement requirements-dev.txt\nvx env3x make test\n\njsonpickle supports multiple Python versions, so using a combination of\nmultiple virtualenvs and tox is useful in order to catch compatibility\nissues when developing.\n\nGPG Signing\nReleases before v3.0.0 are signed with davvid's key. v3.0.0 and after are likely signed by Theelx's key. 
All upcoming releases should be signed by one of these two keys, usually Theelx's key.\n\n\n", "description": "Serializes arbitrary Python objects to JSON and deserializes back to objects."}, {"name": "json5", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyjson5\nKnown issues\nRunning the tests\nVersion History / Release Notes\n\n\n\n\n\nREADME.md\n\n\n\n\npyjson5\nA Python implementation of the JSON5 data format.\nJSON5 extends the\nJSON data interchange format to make it\nslightly more usable as a configuration language:\n\n\nJavaScript-style comments (both single and multi-line) are legal.\n\n\nObject keys may be unquoted if they are legal ECMAScript identifiers\n\n\nObjects and arrays may end with trailing commas.\n\n\nStrings can be single-quoted, and multi-line string literals are allowed.\n\n\nThere are a few other more minor extensions to JSON; see the above page for\nthe full details.\nThis project implements a reader and writer implementation for Python;\nwhere possible, it mirrors the\nstandard Python JSON API\npackage for ease of use.\nThere is one notable difference from the JSON api: the load() and\nloads() methods support optionally checking for (and rejecting) duplicate\nobject keys; pass allow_duplicate_keys=False to do so (duplicates are\nallowed by default).\nThis is an early release. It has been reasonably well-tested, but it is\nSLOW. It can be 1000-6000x slower than the C-optimized JSON module,\nand is 200x slower (or more) than the pure Python JSON module.\nPlease Note: This library only handles JSON5 documents, it does not\nallow you to read arbitrary JavaScript. For example, bare integers can\nbe legal object keys in JavaScript, but they aren't in JSON5.\nKnown issues\n\n\nDid I mention that it is SLOW?\n\n\nThe implementation follows Python3's json implementation where\npossible. This means that the encoding method to dump() is\nignored, and unicode strings are always returned.\n\n\nThe cls keyword argument that json.load()/json.loads() accepts\nto specify a custom subclass of JSONDecoder is not and will not be\nsupported, because this implementation uses a completely different\napproach to parsing strings and doesn't have anything like the\nJSONDecoder class.\n\n\nThe cls keyword argument that json.dump()/json.dumps() accepts\nis also not supported, for consistency with json5.load(). The default\nkeyword is supported, though, and might be able to serve as a\nworkaround.\n\n\nRunning the tests\nTo run the tests, setup a venv and install the required dependencies with\npip install -e '.[dev]', then run the tests with python setup.py test.\nVersion History / Release Notes\n\n\nv0.9.14 (2023-05-14)\n\nGitHub issue #63\nHandle +Infinity as well as -Infinity and Infinity.\n\n\n\nv0.9.13 (2023-03-16)\n\nGitHub PR #64\nRemove a field from one of the JSON benchmark files to\nreduce confusion in Chromium.\nNo code changes.\n\n\n\nv0.9.12 (2023-01-02)\n\nFix GitHub Actions config file to no longer test against\nPython 3.6 or 3.7. 
For now we will only test against an\n\"oldest\" release (3.8 in this case) and a \"current\"\nrelease (3.11 in this case).\n\n\n\nv0.9.11 (2023-01-02)\n\nGitHub issue #60\nFixed minor Python2 compatibility issue by referring to\nfloat(\"inf\") instead of math.inf.\n\n\n\nv0.9.10 (2022-08-18)\n\nGitHub issue #58\nUpdated the //README.md to be clear that parsing arbitrary JS\ncode may not work.\nOtherwise, no code changes.\n\n\n\nv0.9.9 (2022-08-01)\n\nGitHub issue #57\nFixed serialization for objects that subclass int or float:\nPreviously we would use the objects str implementation, but\nthat might result in an illegal JSON5 value if the object had\ncustomized str to return something illegal. Instead,\nwe follow the lead of the JSON module and call int.__repr__\nor float.__repr__ directly.\nWhile I was at it, I added tests for dumps(-inf) and dumps(nan)\nwhen those were supposed to be disallowed by allow_nan=False.\n\n\n\nv0.9.8 (2022-05-08)\n\nGitHub issue #47\nFixed error reporting in some cases due to how parsing was handling\nnested rules in the grammar - previously the reported location for\nthe error could be far away from the point where it actually happened.\n\n\n\nv0.9.7 (2022-05-06)\n\nGitHub issue #52\nFixed behavior of default fn in dump and dumps. Previously\nwe didn't require the function to return a string, and so we could\nend up returning something that wasn't actually valid. This change\nnow matches the behavior in the json module. Note: This is a\npotentially breaking change.\n\n\n\nv0.9.6 (2021-06-21)\n\nBump development status classifier to 5 - Production/Stable, which\nthe library feels like it is at this point. If I do end up significantly\nreworking things to speed it up and/or to add round-trip editing,\nthat'll likely be a 2.0. If this version has no reported issues,\nI'll likely promote it to 1.0.\nAlso bump the tested Python versions to 2.7, 3.8 and 3.9, though\nearlier Python3 versions will likely continue to work as well.\nGitHub issue #46\nFix incorrect serialization of custom subtypes\nMake it possible to run the tests if hypothesis isn't installed.\n\n\n\nv0.9.5 (2020-05-26)\n\nMiscellaneous non-source cleanups in the repo, including setting\nup GitHub Actions for a CI system. 
No changes to the library from\nv0.9.4, other than updating the version.\n\n\n\nv0.9.4 (2020-03-26)\n\nGitHub pull #38\nFix from fredrik@fornwall.net for dumps() crashing when passed\nan empty string as a key in an object.\n\n\n\nv0.9.3 (2020-03-17)\n\nGitHub pull #35\nFix from pastelmind@ for dump() not passing the right args to dumps().\nFix from p.skouzos@novafutur.com to remove the tests directory from\nthe setup call, making the package a bit smaller.\n\n\n\nv0.9.2 (2020-03-02)\n\nGitHub pull #34\nFix from roosephu@ for a badly formatted nested list.\n\n\n\nv0.9.1 (2020-02-09)\n\nGitHub issue #33:\nFix stray trailing comma when dumping an object with an invalid key.\n\n\n\nv0.9.0 (2020-01-30)\n\nGitHub issue #29:\nFix an issue where objects keys that started with a reserved\nword were incorrectly quoted.\nGitHub issue #30:\nFix an issue where dumps() incorrectly thought a data structure\nwas cyclic in some cases.\nGitHub issue #32:\nAllow for non-string keys in dicts passed to dump()/dumps().\nAdd an allow_duplicate_keys=False to prevent possible\nill-formed JSON that might result.\n\n\n\nv0.8.5 (2019-07-04)\n\nGitHub issue #25:\nAdd LICENSE and README.md to the dist.\nGitHub issue #26:\nFix printing of empty arrays and objects with indentation, fix\nmisreporting of the position on parse failures in some cases.\n\n\n\nv0.8.4 (2019-06-11)\n\nUpdated the version history, too.\n\n\n\nv0.8.3 (2019-06-11)\n\nTweaked the README, bumped the version, forgot to update the version\nhistory :).\n\n\n\nv0.8.2 (2019-06-11)\n\nActually bump the version properly, to 0.8.2.\n\n\n\nv0.8.1 (2019-06-11)\n\nFix bug in setup.py that messed up the description. Unfortunately,\nI forgot to bump the version for this, so this also identifies as 0.8.0.\n\n\n\nv0.8.0 (2019-06-11)\n\nAdd allow_duplicate_keys=True as a default argument to\njson5.load()/json5.loads(). If you set the key to False, duplicate\nkeys in a single dict will be rejected. The default is set to True\nfor compatibility with json.load(), earlier versions of json5, and\nbecause it's simply not clear if people would want duplicate checking\nenabled by default.\n\n\n\nv0.7 (2019-03-31)\n\nChanges dump()/dumps() to not quote object keys by default if they are\nlegal identifiers. Passing quote_keys=True will turn that off\nand always quote object keys.\nChanges dump()/dumps() to insert trailing commas after the last item\nin an array or an object if the object is printed across multiple lines\n(i.e., if indent is not None). Passing trailing_commas=False will\nturn that off.\nThe json5.tool command line tool now supports the --indent,\n--[no-]quote-keys, and --[no-]trailing-commas flags to allow\nfor more control over the output, in addition to the existing\n--as-json flag.\nThe json5.tool command line tool no longer supports reading from\nmultiple files, you can now only read from a single file or\nfrom standard input.\nThe implementation no longer relies on the standard json module\nfor anything. 
The output should still match the json module (except\nas noted above) and discrepancies should be reported as bugs.\n\n\n\nv0.6.2 (2019-03-08)\n\nFix GitHub issue #23 and\npass through unrecognized escape sequences.\n\n\n\nv0.6.1 (2018-05-22)\n\nCleaned up a couple minor nits in the package.\n\n\n\nv0.6.0 (2017-11-28)\n\nFirst implementation that attempted to implement 100% of the spec.\n\n\n\nv0.5.0 (2017-09-04)\n\nFirst implementation that supported the full set of kwargs that\nthe json module supports.\n\n\n\n\n\n", "description": "Implements JSON5 data format which extends JSON with comments, unquoted keys, trailing commas etc."}, {"name": "joblib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGetting the latest code\nInstalling\nDependencies\nWorkflow to contribute\nRunning the test suite\nBuilding the docs\nMaking a source tarball\nMaking a release and uploading it to PyPI\nUpdating the changelog\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n   \n\nThe homepage of joblib with user documentation is located on:\nhttps://joblib.readthedocs.io\n\nGetting the latest code\nTo get the latest code using git, simply type:\ngit clone git://github.com/joblib/joblib.git\n\nIf you don't have git installed, you can download a zip or tarball\nof the latest code: http://github.com/joblib/joblib/archives/master\n\nInstalling\nYou can use pip to install joblib:\npip install joblib\n\nfrom any directory or:\npython setup.py install\n\nfrom the source directory.\n\nDependencies\n\nJoblib has no mandatory dependencies besides Python (supported versions are\n3.7+).\nJoblib has an optional dependency on Numpy (at least version 1.6.1) for array\nmanipulation.\nJoblib includes its own vendored copy of\nloky for process management.\nJoblib can efficiently dump and load numpy arrays but does not require numpy\nto be installed.\nJoblib has an optional dependency on\npython-lz4 as a faster alternative to\nzlib and gzip for compressed serialization.\nJoblib has an optional dependency on psutil to mitigate memory leaks in\nparallel worker processes.\nSome examples require external dependencies such as pandas. See the\ninstructions in the Building the docs section for details.\n\n\nWorkflow to contribute\nTo contribute to joblib, first create an account on github. Once this is done, fork the joblib repository to have your own repository,\nclone it using 'git clone' on the computers where you want to work. Make\nyour changes in your clone, push them to your github account, test them\non several computers, and when you are happy with them, send a pull\nrequest to the main repository.\n\nRunning the test suite\nTo run the test suite, you need the pytest (version >= 3) and coverage modules.\nRun the test suite using:\npytest joblib\n\nfrom the root of the project.\n\nBuilding the docs\nTo build the docs you need to have sphinx (>=1.4) and some dependencies\ninstalled:\npip install -U -r .readthedocs-requirements.txt\n\nThe docs can then be built with the following command:\nmake doc\n\nThe html docs are located in the doc/_build/html directory.\n\nMaking a source tarball\nTo create a source tarball, eg for packaging or distributing, run the\nfollowing command:\npython setup.py sdist\n\nThe tarball will be created in the dist directory. This command will\ncompile the docs, and the resulting tarball can be installed with\nno extra dependencies than the Python standard library. 
You will need\nsetuptool and sphinx.\n\nMaking a release and uploading it to PyPI\nThis command is only run by project manager, to make a release, and\nupload in to PyPI:\npython setup.py sdist bdist_wheel\ntwine upload dist/*\n\nNote that the documentation should automatically get updated at each git\npush. If that is not the case, try building th doc locally and resolve\nany doc build error (in particular when running the examples).\n\nUpdating the changelog\nChanges are listed in the CHANGES.rst file. They must be manually updated\nbut, the following git command may be used to generate the lines:\ngit log --abbrev-commit --date=short --no-merges --sparse\n\n\n\n", "description": "Provides utilities for lightweight pipelining and caching of Python functions."}, {"name": "Jinja2", "readme": "\nJinja is a fast, expressive, extensible templating engine. Special\nplaceholders in the template allow writing code similar to Python\nsyntax. Then the template is passed data to render the final document.\nIt includes:\n\nTemplate inheritance and inclusion.\nDefine and import macros within templates.\nHTML templates can use autoescaping to prevent XSS from untrusted\nuser input.\nA sandboxed environment can safely render untrusted templates.\nAsyncIO support for generating templates and calling async\nfunctions.\nI18N support with Babel.\nTemplates are compiled to optimized Python code just-in-time and\ncached, or can be compiled ahead-of-time.\nExceptions point to the correct line in templates to make debugging\neasier.\nExtensible filters, tests, functions, and even syntax.\n\nJinja\u2019s philosophy is that while application logic belongs in Python if\npossible, it shouldn\u2019t make the template designer\u2019s job difficult by\nrestricting functionality too much.\n\nInstalling\nInstall and update using pip:\n$ pip install -U Jinja2\n\n\nIn A Nutshell\n{% extends \"base.html\" %}\n{% block title %}Members{% endblock %}\n{% block content %}\n  <ul>\n  {% for user in users %}\n    <li><a href=\"{{ user.url }}\">{{ user.username }}</a></li>\n  {% endfor %}\n  </ul>\n{% endblock %}\n\n\nDonate\nThe Pallets organization develops and supports Jinja and other popular\npackages. In order to grow the community of contributors and users, and\nallow the maintainers to devote more time to the projects, please\ndonate today.\n\n\nLinks\n\nDocumentation: https://jinja.palletsprojects.com/\nChanges: https://jinja.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/Jinja2/\nSource Code: https://github.com/pallets/jinja/\nIssue Tracker: https://github.com/pallets/jinja/issues/\nWebsite: https://palletsprojects.com/p/jinja/\nTwitter: https://twitter.com/PalletsTeam\nChat: https://discord.gg/pallets\n\n\n", "description": "Templating engine for Python, with template inheritance and automatic HTML escaping."}, {"name": "jedi", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJedi - an awesome autocompletion, static analysis and refactoring library for Python\nIssues & Questions\nInstallation\nFeatures and Limitations\nAPI\nAutocompletion / Goto / Documentation\nAutocompletion in your REPL (IPython, etc.)\nStatic Analysis\nRefactoring\nCode Search\nDevelopment\nTesting\nAcknowledgements\n\n\n\n\n\nREADME.rst\n\n\n\n\nJedi - an awesome autocompletion, static analysis and refactoring library for Python\n\n\n\n\n\n\nJedi is a static analysis tool for Python that is typically used in\nIDEs/editors plugins. Jedi has a focus on autocompletion and goto\nfunctionality. 
Other features include refactoring, code search and finding\nreferences.\nJedi has a simple API to work with. There is a reference implementation as a\nVIM-Plugin. Autocompletion in your\nREPL is also possible, IPython uses it natively and for the CPython REPL you\ncan install it. Jedi is well tested and bugs should be rare.\nJedi can currently be used with the following editors/projects:\n\nVim (jedi-vim, YouCompleteMe, deoplete-jedi, completor.vim)\nVisual Studio Code (via Python Extension)\nEmacs (Jedi.el, company-mode, elpy, anaconda-mode, ycmd)\nSublime Text (SublimeJEDI [ST2 + ST3], anaconda [only ST3])\nTextMate (Not sure if it's actually working)\nKate version 4.13+ supports it natively, you have to enable it, though.  [see]\nAtom (autocomplete-python-jedi)\nGNOME Builder (with support for GObject Introspection)\nGedit (gedi)\nwdb - Web Debugger\nEric IDE\nIPython 6.0.0+\nxonsh shell has jedi extension\n\nand many more!\nThere are a few language servers that use Jedi:\n\njedi-language-server\npython-language-server (currently unmaintained)\npython-lsp-server (fork from python-language-server)\nanakin-language-server\n\nHere are some pictures taken from jedi-vim:\n\nCompletion for almost anything:\n\nDocumentation:\n\nGet the latest version from github\n(master branch should always be kind of stable/working).\nDocs are available at https://jedi.readthedocs.org/en/latest/. Pull requests with enhancements\nand/or fixes are awesome and most welcome. Jedi uses semantic versioning.\nIf you want to stay up-to-date with releases, please subscribe to this\nmailing list: https://groups.google.com/g/jedi-announce. To subscribe you can\nsimply send an empty email to jedi-announce+subscribe@googlegroups.com.\n\nIssues & Questions\nYou can file issues and questions in the issue tracker\n<https://github.com/davidhalter/jedi/>. Alternatively you can also ask on\nStack Overflow with\nthe label python-jedi.\n\nInstallation\nCheck out the docs.\n\nFeatures and Limitations\nJedi's features are listed here:\nFeatures.\nYou can run Jedi on Python 3.6+ but it should also\nunderstand code that is older than those versions. Additionally you should be\nable to use Virtualenvs\nvery well.\nTips on how to use Jedi efficiently can be found here.\n\nAPI\nYou can find a comprehensive documentation for the\nAPI here.\n\nAutocompletion / Goto / Documentation\nThere are the following commands:\n\njedi.Script.goto\njedi.Script.infer\njedi.Script.help\njedi.Script.complete\njedi.Script.get_references\njedi.Script.get_signatures\njedi.Script.get_context\n\nThe returned objects are very powerful and are really all you might need.\n\nAutocompletion in your REPL (IPython, etc.)\nJedi is a dependency of IPython. Autocompletion in IPython with Jedi is\ntherefore possible without additional configuration.\nHere is an example video how REPL completion\ncan look like.\nFor the python shell you can enable tab completion in a REPL.\n\nStatic Analysis\nFor a lot of forms of static analysis, you can try to use\njedi.Script(...).get_names. It will return a list of names that you can\nthen filter and work with. There is also a way to list the syntax errors in a\nfile: jedi.Script.get_syntax_errors.\n\nRefactoring\nJedi supports the following refactorings:\n\njedi.Script.inline\njedi.Script.rename\njedi.Script.extract_function\njedi.Script.extract_variable\n\n\nCode Search\nThere is support for module search with jedi.Script.search, and project\nsearch for jedi.Project.search. 
The way to search is either by providing a\nname like foo or by using dotted syntax like foo.bar. Additionally you\ncan provide the API type like class foo.bar.Bar. There are also the\nfunctions jedi.Script.complete_search and jedi.Project.complete_search.\n\nDevelopment\nThere's a pretty good and extensive development documentation.\n\nTesting\nThe test suite uses pytest:\npip install pytest\n\nIf you want to test only a specific Python version (e.g. Python 3.8), it is as\neasy as:\npython3.8 -m pytest\n\nFor more detailed information visit the testing documentation.\n\nAcknowledgements\nThanks a lot to all the\ncontributors!\n\n\n", "description": "Autocompletion, static analysis and refactoring library for Python."}, {"name": "jax", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJAX: Autograd and XLA\nWhat is JAX?\nContents\nQuickstart: Colab in the Cloud\nTransformations\nAutomatic differentiation with grad\nCompilation with jit\nAuto-vectorization with vmap\nSPMD programming with pmap\nCurrent gotchas\nInstallation\npip installation: CPU\npip installation: GPU (CUDA, installed via pip, easier)\npip installation: GPU (CUDA, installed locally, harder)\nDocker containers: NVIDIA GPU\npip installation: Google Cloud TPU\npip installation: Apple GPUs\nConda installation\nBuilding JAX from source\nNeural network libraries\nCiting JAX\nReference documentation\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\nJAX: Autograd and XLA\n\n\nQuickstart\n| Transformations\n| Install guide\n| Neural net libraries\n| Change logs\n| Reference docs\nWhat is JAX?\nJAX is Autograd and XLA,\nbrought together for high-performance machine learning research.\nWith its updated version of Autograd,\nJAX can automatically differentiate native\nPython and NumPy functions. It can differentiate through loops, branches,\nrecursion, and closures, and it can take derivatives of derivatives of\nderivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)\nvia grad as well as forward-mode differentiation,\nand the two can be composed arbitrarily to any order.\nWhat\u2019s new is that JAX uses XLA\nto compile and run your NumPy programs on GPUs and TPUs. Compilation happens\nunder the hood by default, with library calls getting just-in-time compiled and\nexecuted. But JAX also lets you just-in-time compile your own Python functions\ninto XLA-optimized kernels using a one-function API,\njit. Compilation and automatic differentiation can be\ncomposed arbitrarily, so you can express sophisticated algorithms and get\nmaximal performance without leaving Python. You can even program multiple GPUs\nor TPU cores at once using pmap, and\ndifferentiate through the whole thing.\nDig a little deeper, and you'll see that JAX is really an extensible system for\ncomposable function transformations. Both\ngrad and jit\nare instances of such transformations. Others are\nvmap for automatic vectorization and\npmap for single-program multiple-data (SPMD)\nparallel programming of multiple accelerators, with more to come.\nThis is a research project, not an official Google product. 
Expect bugs and\nsharp edges.\nPlease help by trying it out, reporting\nbugs, and letting us know what you\nthink!\nimport jax.numpy as jnp\nfrom jax import grad, jit, vmap\n\ndef predict(params, inputs):\n  for W, b in params:\n    outputs = jnp.dot(inputs, W) + b\n    inputs = jnp.tanh(outputs)  # inputs to the next layer\n  return outputs                # no activation on last layer\n\ndef loss(params, inputs, targets):\n  preds = predict(params, inputs)\n  return jnp.sum((preds - targets)**2)\n\ngrad_loss = jit(grad(loss))  # compiled gradient evaluation function\nperex_grads = jit(vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads\nContents\n\nQuickstart: Colab in the Cloud\nTransformations\nCurrent gotchas\nInstallation\nNeural net libraries\nCiting JAX\nReference documentation\n\nQuickstart: Colab in the Cloud\nJump right in using a notebook in your browser, connected to a Google Cloud GPU.\nHere are some starter notebooks:\n\nThe basics: NumPy on accelerators, grad for differentiation, jit for compilation, and vmap for vectorization\nTraining a Simple Neural Network, with TensorFlow Dataset Data Loading\n\nJAX now runs on Cloud TPUs. To try out the preview, see the Cloud TPU\nColabs.\nFor a deeper dive into JAX:\n\nThe Autodiff Cookbook, Part 1: easy and powerful automatic differentiation in JAX\nCommon gotchas and sharp edges\nSee the full list of\nnotebooks.\n\nTransformations\nAt its core, JAX is an extensible system for transforming numerical functions.\nHere are four transformations of primary interest: grad, jit, vmap, and\npmap.\nAutomatic differentiation with grad\nJAX has roughly the same API as Autograd.\nThe most popular function is\ngrad\nfor reverse-mode gradients:\nfrom jax import grad\nimport jax.numpy as jnp\n\ndef tanh(x):  # Define a function\n  y = jnp.exp(-2.0 * x)\n  return (1.0 - y) / (1.0 + y)\n\ngrad_tanh = grad(tanh)  # Obtain its gradient function\nprint(grad_tanh(1.0))   # Evaluate it at x = 1.0\n# prints 0.4199743\nYou can differentiate to any order with grad.\nprint(grad(grad(grad(tanh)))(1.0))\n# prints 0.62162673\nFor more advanced autodiff, you can use\njax.vjp for\nreverse-mode vector-Jacobian products and\njax.jvp for\nforward-mode Jacobian-vector products. The two can be composed arbitrarily with\none another, and with other JAX transformations. 
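As a small, illustrative sketch of the two call signatures (this example is ours, not taken from the upstream README):

import jax
import jax.numpy as jnp

def f(x):
  return jnp.sin(x) * x

# Forward mode: Jacobian-vector product of f at x = 1.0 with tangent 1.0
y, y_dot = jax.jvp(f, (1.0,), (1.0,))

# Reverse mode: vjp returns the output plus a function for vector-Jacobian products
y, f_vjp = jax.vjp(f, 1.0)
(x_bar,) = f_vjp(1.0)  # equals y_dot for this scalar function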
Here's one way to compose those\nto make a function that efficiently computes full Hessian\nmatrices:\nfrom jax import jit, jacfwd, jacrev\n\ndef hessian(fun):\n  return jit(jacfwd(jacrev(fun)))\nAs with Autograd, you're free to use\ndifferentiation with Python control structures:\ndef abs_val(x):\n  if x > 0:\n    return x\n  else:\n    return -x\n\nabs_val_grad = grad(abs_val)\nprint(abs_val_grad(1.0))   # prints 1.0\nprint(abs_val_grad(-1.0))  # prints -1.0 (abs_val is re-evaluated)\nSee the reference docs on automatic\ndifferentiation\nand the JAX Autodiff\nCookbook\nfor more.\nCompilation with jit\nYou can use XLA to compile your functions end-to-end with\njit,\nused either as an @jit decorator or as a higher-order function.\nimport jax.numpy as jnp\nfrom jax import jit\n\ndef slow_f(x):\n  # Element-wise ops see a large benefit from fusion\n  return x * x + x * 2.0\n\nx = jnp.ones((5000, 5000))\nfast_f = jit(slow_f)\n%timeit -n10 -r3 fast_f(x)  # ~ 4.5 ms / loop on Titan X\n%timeit -n10 -r3 slow_f(x)  # ~ 14.5 ms / loop (also on GPU via JAX)\nYou can mix jit and grad and any other JAX transformation however you like.\nUsing jit puts constraints on the kind of Python control flow\nthe function can use; see\nthe Gotchas\nNotebook\nfor more.\nAuto-vectorization with vmap\nvmap is\nthe vectorizing map.\nIt has the familiar semantics of mapping a function along array axes, but\ninstead of keeping the loop on the outside, it pushes the loop down into a\nfunction\u2019s primitive operations for better performance.\nUsing vmap can save you from having to carry around batch dimensions in your\ncode. For example, consider this simple unbatched neural network prediction\nfunction:\ndef predict(params, input_vec):\n  assert input_vec.ndim == 1\n  activations = input_vec\n  for W, b in params:\n    outputs = jnp.dot(W, activations) + b  # `activations` on the right-hand side!\n    activations = jnp.tanh(outputs)        # inputs to the next layer\n  return outputs                           # no activation on last layer\nWe often instead write jnp.dot(activations, W) to allow for a batch dimension on the\nleft side of activations, but we\u2019ve written this particular prediction function to\napply only to single input vectors. If we wanted to apply this function to a\nbatch of inputs at once, semantically we could just write\nfrom functools import partial\npredictions = jnp.stack(list(map(partial(predict, params), input_batch)))\nBut pushing one example through the network at a time would be slow! It\u2019s better\nto vectorize the computation, so that at every layer we\u2019re doing matrix-matrix\nmultiplication rather than matrix-vector multiplication.\nThe vmap function does that transformation for us. That is, if we write\nfrom jax import vmap\npredictions = vmap(partial(predict, params))(input_batch)\n# or, alternatively\npredictions = vmap(predict, in_axes=(None, 0))(params, input_batch)\nthen the vmap function will push the outer loop inside the function, and our\nmachine will end up executing matrix-matrix multiplications exactly as if we\u2019d\ndone the batching by hand.\nIt\u2019s easy enough to manually batch a simple neural network without vmap, but\nin other cases manual vectorization can be impractical or impossible. Take the\nproblem of efficiently computing per-example gradients: that is, for a fixed set\nof parameters, we want to compute the gradient of our loss function evaluated\nseparately at each example in a batch. 
With vmap, it\u2019s easy:\nper_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)\nOf course, vmap can be arbitrarily composed with jit, grad, and any other\nJAX transformation! We use vmap with both forward- and reverse-mode automatic\ndifferentiation for fast Jacobian and Hessian matrix calculations in\njax.jacfwd, jax.jacrev, and jax.hessian.\nSPMD programming with pmap\nFor parallel programming of multiple accelerators, like multiple GPUs, use\npmap.\nWith pmap you write single-program multiple-data (SPMD) programs, including\nfast parallel collective communication operations. Applying pmap will mean\nthat the function you write is compiled by XLA (similarly to jit), then\nreplicated and executed in parallel across devices.\nHere's an example on an 8-GPU machine:\nfrom jax import random, pmap\nimport jax.numpy as jnp\n\n# Create 8 random 5000 x 6000 matrices, one per GPU\nkeys = random.split(random.PRNGKey(0), 8)\nmats = pmap(lambda key: random.normal(key, (5000, 6000)))(keys)\n\n# Run a local matmul on each device in parallel (no data transfer)\nresult = pmap(lambda x: jnp.dot(x, x.T))(mats)  # result.shape is (8, 5000, 5000)\n\n# Compute the mean on each device in parallel and print the result\nprint(pmap(jnp.mean)(result))\n# prints [1.1566595 1.1805978 ... 1.2321935 1.2015157]\nIn addition to expressing pure maps, you can use fast collective communication\noperations\nbetween devices:\nfrom functools import partial\nfrom jax import lax\n\n@partial(pmap, axis_name='i')\ndef normalize(x):\n  return x / lax.psum(x, 'i')\n\nprint(normalize(jnp.arange(4.)))\n# prints [0.         0.16666667 0.33333334 0.5       ]\nYou can even nest pmap functions for more\nsophisticated communication patterns.\nIt all composes, so you're free to differentiate through parallel computations:\nfrom jax import grad\n\n@pmap\ndef f(x):\n  y = jnp.sin(x)\n  @pmap\n  def g(z):\n    return jnp.cos(z) * jnp.tan(y.sum()) * jnp.tanh(x).sum()\n  return grad(lambda w: jnp.sum(g(w)))(x)\n\nprint(f(x))\n# [[ 0.        , -0.7170853 ],\n#  [-3.1085174 , -0.4824318 ],\n#  [10.366636  , 13.135289  ],\n#  [ 0.22163185, -0.52112055]]\n\nprint(grad(lambda x: jnp.sum(f(x)))(x))\n# [[ -3.2369726,  -1.6356447],\n#  [  4.7572474,  11.606951 ],\n#  [-98.524414 ,  42.76499  ],\n#  [ -1.6007166,  -1.2568436]]\nWhen reverse-mode differentiating a pmap function (e.g. with grad), the\nbackward pass of the computation is parallelized just like the forward pass.\nSee the SPMD\nCookbook\nand the SPMD MNIST classifier from scratch\nexample\nfor more.\nCurrent gotchas\nFor a more thorough survey of current gotchas, with examples and explanations,\nwe highly recommend reading the Gotchas\nNotebook.\nSome standouts:\n\nJAX transformations only work on pure functions, which don't have side-effects and respect referential transparency (i.e. object identity testing with is isn't preserved). If you use a JAX transformation on an impure Python function, you might see an error like Exception: Can't lift Traced...  or Exception: Different traces at same level.\nIn-place mutating updates of\narrays, like x[i] += y, aren't supported, but there are functional alternatives. Under a jit, those functional alternatives will reuse buffers in-place automatically.\nRandom numbers are\ndifferent, but for good reasons.\nIf you're looking for convolution\noperators,\nthey're in the jax.lax package.\nJAX enforces single-precision (32-bit, e.g. float32) values by default, and\nto enable\ndouble-precision\n(64-bit, e.g. 
float64) one needs to set the jax_enable_x64 variable at\nstartup (or set the environment variable JAX_ENABLE_X64=True).\nOn TPU, JAX uses 32-bit values by default for everything except internal\ntemporary variables in 'matmul-like' operations, such as jax.numpy.dot and lax.conv.\nThose ops have a precision parameter which can be used to simulate\ntrue 32-bit, with a cost of possibly slower runtime.\nSome of NumPy's dtype promotion semantics involving a mix of Python scalars\nand NumPy types aren't preserved, namely np.add(1, np.array([2], np.float32)).dtype is float64 rather than float32.\nSome transformations, like jit, constrain how you can use Python control\nflow.\nYou'll always get loud errors if something goes wrong. You might have to use\njit's static_argnums\nparameter,\nstructured control flow\nprimitives\nlike\nlax.scan,\nor just use jit on smaller subfunctions.\n\nInstallation\nJAX is written in pure Python, but it depends on XLA, which needs to be\ninstalled as the jaxlib package. Use the following instructions to install a\nbinary package with pip or conda, to use a\nDocker container, or to build JAX from\nsource.\nWe support installing or building jaxlib on Linux (Ubuntu 20.04 or later) and\nmacOS (10.12 or later) platforms. There is also experimental native Windows\nsupport.\nWindows users can use JAX on CPU and GPU via the Windows Subsystem for\nLinux, or alternatively\nthey can use the experimental native Windows CPU-only support.\npip installation: CPU\nWe currently release jaxlib wheels for the following\noperating systems and architectures:\n\nLinux, x86-64\nMac, Intel\nMac, ARM\nWindows, x86-64 (experimental)\n\nTo install a CPU-only version of JAX, which might be useful for doing local\ndevelopment on a laptop, you can run\npip install --upgrade pip\npip install --upgrade \"jax[cpu]\"\nOn Windows, you may also need to install the\nMicrosoft Visual Studio 2019 Redistributable\nif it is not already installed on your machine.\nOther operating systems and architectures require building from source. Trying\nto pip install on other operating systems and architectures may lead to jaxlib\nnot being installed alongside jax, although jax may successfully install\n(but fail at runtime).\npip installation: GPU (CUDA, installed via pip, easier)\nThere are two ways to install JAX with NVIDIA GPU support: using CUDA and CUDNN\ninstalled from pip wheels, and using a self-installed CUDA/CUDNN. We recommend\ninstalling CUDA and CUDNN using the pip wheels, since it is much easier!\nJAX supports NVIDIA GPUs that have SM version 5.2 (Maxwell) or newer.\nNote that Kepler-series GPUs are no longer supported by JAX since\nNVIDIA has dropped support for Kepler GPUs in its software.\nYou must first install the NVIDIA driver. 
We\nrecommend installing the newest driver available from NVIDIA, but the driver\nmust be version >= 525.60.13 for CUDA 12 and >= 450.80.02 for CUDA 11 on Linux.\nIf you need to use an newer CUDA toolkit with an older driver, for example\non a cluster where you cannot update the NVIDIA driver easily, you may be\nable to use the\nCUDA forward compatibility packages\nthat NVIDIA provides for this purpose.\npip install --upgrade pip\n\n# CUDA 12 installation\n# Note: wheels only available on linux.\npip install --upgrade \"jax[cuda12_pip]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n\n# CUDA 11 installation\n# Note: wheels only available on linux.\npip install --upgrade \"jax[cuda11_pip]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\npip installation: GPU (CUDA, installed locally, harder)\nIf you prefer to use a preinstalled copy of CUDA, you must first\ninstall CUDA and\nCuDNN.\nJAX provides pre-built CUDA-compatible wheels for Linux x86_64 only. Other\ncombinations of operating system and architecture are possible, but require\nbuilding from source.\nYou should use an NVIDIA driver version that is at least as new as your\nCUDA toolkit's corresponding driver version.\nIf you need to use an newer CUDA toolkit with an older driver, for example\non a cluster where you cannot update the NVIDIA driver easily, you may be\nable to use the\nCUDA forward compatibility packages\nthat NVIDIA provides for this purpose.\nJAX currently ships two CUDA wheel variants:\n\nCUDA 12.0 and CuDNN 8.9.\nCUDA 11.8 and CuDNN 8.6.\n\nYou may use a JAX wheel provided the major version of your CUDA and CuDNN\ninstallation matches, and the minor version is at least as new as the version\nJAX expects. For example, you would be able to use the CUDA 12.0 wheel with\nCUDA 12.1 and CuDNN 8.9.\nYour CUDA installation must also be new enough to support your GPU. If you have\nan Ada Lovelace (e.g., RTX 4080) or Hopper (e.g., H100) GPU,\nyou must use CUDA 11.8 or newer.\nTo install, run\npip install --upgrade pip\n\n# Installs the wheel compatible with CUDA 12 and cuDNN 8.9 or newer.\n# Note: wheels only available on linux.\npip install --upgrade \"jax[cuda12_local]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n\n# Installs the wheel compatible with CUDA 11 and cuDNN 8.6 or newer.\n# Note: wheels only available on linux.\npip install --upgrade \"jax[cuda11_local]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\nThese pip installations do not work with Windows, and may fail silently; see\nabove.\nYou can find your CUDA version with the command:\nnvcc --version\nSome GPU functionality expects the CUDA installation to be at\n/usr/local/cuda-X.X, where X.X should be replaced with the CUDA version number\n(e.g. cuda-11.8). 
If CUDA is installed elsewhere on your system, you can either\ncreate a symlink:\nsudo ln -s /path/to/cuda /usr/local/cuda-X.X\nPlease let us know on the issue tracker\nif you run into any errors or problems with the prebuilt wheels.\nDocker containers: NVIDIA GPU\nNVIDIA provides the JAX\nToolbox containers, which are\nbleeding edge containers containing nightly releases of jax and some\nmodels/frameworks.\npip installation: Google Cloud TPU\nJAX provides pre-built wheels for\nGoogle Cloud TPU.\nTo install JAX along with appropriate versions of jaxlib and libtpu, you can run\nthe following in your cloud TPU VM:\npip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html\nFor interactive notebook users: Colab TPUs no longer support JAX as of\nJAX version 0.4. However, for an interactive TPU notebook in the cloud, you can\nuse Kaggle TPU notebooks, which fully\nsupport JAX.\npip installation: Apple GPUs\nApple provides an experimental Metal plugin for Apple GPU hardware. For details,\nsee\nApple's JAX on Metal documentation.\nThere are several caveats with the Metal plugin:\n\nthe Metal plugin is new and experimental and has a number of\nknown issues.\nPlease report any issues on the JAX issue tracker.\nthe Metal plugin currently requires very specific versions of jax and\njaxlib. This restriction will be relaxed over time as the plugin API\nmatures.\n\nConda installation\nThere is a community-supported Conda build of jax. To install using conda,\nsimply run\nconda install jax -c conda-forge\nTo install on a machine with an NVIDIA GPU, run\nconda install jaxlib=*=*cuda* jax cuda-nvcc -c conda-forge -c nvidia\nNote the cudatoolkit distributed by conda-forge is missing ptxas, which\nJAX requires. You must therefore either install the cuda-nvcc package from\nthe nvidia channel, or install CUDA on your machine separately so that ptxas\nis in your path. The channel order above is important (conda-forge before\nnvidia).\nIf you would like to override which release of CUDA is used by JAX, or to\ninstall the CUDA build on a machine without GPUs, follow the instructions in the\nTips & tricks\nsection of the conda-forge website.\nSee the conda-forge\njaxlib and\njax repositories\nfor more details.\nBuilding JAX from source\nSee Building JAX from\nsource.\nNeural network libraries\nMultiple Google research groups develop and share libraries for training neural\nnetworks in JAX. If you want a fully featured library for neural network\ntraining with examples and how-to guides, try\nFlax.\nIn addition, DeepMind has open-sourced an ecosystem of libraries around\nJAX\nincluding Haiku for neural network\nmodules, Optax for gradient processing and\noptimization, RLax for RL algorithms, and\nchex for reliable code and testing. 
(Watch\nthe NeurIPS 2020 JAX Ecosystem at DeepMind talk\nhere)\nCiting JAX\nTo cite this repository:\n@software{jax2018github,\n  author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},\n  title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},\n  url = {http://github.com/google/jax},\n  version = {0.3.13},\n  year = {2018},\n}\n\nIn the above bibtex entry, names are in alphabetical order, the version number\nis intended to be that from jax/version.py, and\nthe year corresponds to the project's open-source release.\nA nascent version of JAX, supporting only automatic differentiation and\ncompilation to XLA, was described in a paper that appeared at SysML\n2018. We're currently working on\ncovering JAX's ideas and capabilities in a more comprehensive and up-to-date\npaper.\nReference documentation\nFor details about the JAX API, see the\nreference documentation.\nFor getting started as a JAX developer, see the\ndeveloper documentation.\n\n\n", "description": "Autograd and XLA compiler for NumPy written in Python.", "category": "Machine learning"}, {"name": "itsdangerous", "readme": "\n\u2026 so better sign this\nVarious helpers to pass data to untrusted environments and to get it\nback safe and sound. Data is cryptographically signed to ensure that a\ntoken has not been tampered with.\nIt\u2019s possible to customize how data is serialized. Data is compressed as\nneeded. A timestamp can be added and verified automatically while\nloading a token.\n\nInstalling\nInstall and update using pip:\npip install -U itsdangerous\n\n\nA Simple Example\nHere\u2019s how you could generate a token for transmitting a user\u2019s id and\nname between web requests.\nfrom itsdangerous import URLSafeSerializer\nauth_s = URLSafeSerializer(\"secret key\", \"auth\")\ntoken = auth_s.dumps({\"id\": 5, \"name\": \"itsdangerous\"})\n\nprint(token)\n# eyJpZCI6NSwibmFtZSI6Iml0c2Rhbmdlcm91cyJ9.6YP6T0BaO67XP--9UzTrmurXSmg\n\ndata = auth_s.loads(token)\nprint(data[\"name\"])\n# itsdangerous\n\n\nDonate\nThe Pallets organization develops and supports ItsDangerous and other\npopular packages. In order to grow the community of contributors and\nusers, and allow the maintainers to devote more time to the projects,\nplease donate today.\n\n\nLinks\n\nDocumentation: https://itsdangerous.palletsprojects.com/\nChanges: https://itsdangerous.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/ItsDangerous/\nSource Code: https://github.com/pallets/itsdangerous/\nIssue Tracker: https://github.com/pallets/itsdangerous/issues/\nWebsite: https://palletsprojects.com/p/itsdangerous/\nTwitter: https://twitter.com/PalletsTeam\nChat: https://discord.gg/pallets\n\n\n", "description": "Untrusted data serialization/deserialization library with signatures."}, {"name": "isodate", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nISO 8601 date/time parser\nDocumentation\nInstallation\nLimitations\nFurther information\n\n\n\n\n\nREADME.rst\n\n\n\n\nISO 8601 date/time parser\n\n\n\n\n\n\nThis module implements ISO 8601 date, time and duration parsing.\nThe implementation follows ISO8601:2004 standard, and implements only\ndate/time representations mentioned in the standard. If something is not\nmentioned there, then it is treated as non existent, and not as an allowed\noption.\nFor instance, ISO8601:2004 never mentions 2 digit years. 
So, it is not\nintended by this module to support 2 digit years. (while it may still\nbe valid as ISO date, because it is not explicitly forbidden.)\nAnother example is, when no time zone information is given for a time,\nthen it should be interpreted as local time, and not UTC.\nAs this module maps ISO 8601 dates/times to standard Python data types, like\ndate, time, datetime and timedelta, it is not possible to convert\nall possible ISO 8601 dates/times. For instance, dates before 0001-01-01 are\nnot allowed by the Python date and datetime classes. Additionally\nfractional seconds are limited to microseconds. That means if the parser finds\nfor instance nanoseconds it will round it to microseconds.\n\nDocumentation\n\nThe following parsing methods are available.\n\n\nparse_time:\nparses an ISO 8601 time string into a time object\n\n\n\nparse_date:\nparses an ISO 8601 date string into a date object\n\n\n\nparse_datetime:\nparses an ISO 8601 date-time string into a datetime object\n\n\n\nparse_duration:\nparses an ISO 8601 duration string into a timedelta or Duration\nobject.\n\n\n\nparse_tzinfo:\nparses the time zone info part of an ISO 8601 string into a\ntzinfo object.\n\n\n\n\n\nAs ISO 8601 allows to define durations in years and months, and timedelta\ndoes not handle years and months, this module provides a Duration class,\nwhich can be used almost like a timedelta object (with some limitations).\nHowever, a Duration object can be converted into a timedelta object.\nThere are also ISO formatting methods for all supported data types. Each\nxxx_isoformat method accepts a format parameter. The default format is\nalways the ISO 8601 expanded format. This is the same format used by\ndatetime.isoformat:\n\n\n\ntime_isoformat:\nIntended to create ISO time strings with default format\nhh:mm:ssZ.\n\n\n\ndate_isoformat:\nIntended to create ISO date strings with default format\nyyyy-mm-dd.\n\n\n\ndatetime_isoformat:\nIntended to create ISO date-time strings with default format\nyyyy-mm-ddThh:mm:ssZ.\n\n\n\nduration_isoformat:\nIntended to create ISO duration strings with default format\nPnnYnnMnnDTnnHnnMnnS.\n\n\n\ntz_isoformat:\nIntended to create ISO time zone strings with default format\nhh:mm.\n\n\n\nstrftime:\nA re-implementation mostly compatible with Python's strftime, but\nsupports only those format strings, which can also be used for dates\nprior 1900. This method also understands how to format datetime and\nDuration instances.\n\n\n\n\n\nInstallation\nThis module can easily be installed with Python standard installation methods.\nEither use python setup.py install or in case you have setuptools or\ndistribute available, you can also use easy_install.\n\nLimitations\n\n\nThe parser accepts several date/time representation which should be invalid\naccording to ISO 8601 standard.\nfor date and time together, this parser accepts a mixture of basic and extended format.\ne.g. the date could be in basic format, while the time is accepted in extended format.\nIt also allows short dates and times in date-time strings.\nFor incomplete dates, the first day is chosen. e.g. 
19th century results in a date of\n1901-01-01.\nnegative Duration and timedelta value are not fully supported yet.\n\n\n\n\n\nFurther information\nThe doc strings and unit tests should provide rather detailed information about\nthe methods and their limitations.\nThe source release provides a setup.py script,\nwhich can be used to run the unit tests included.\nSource code is available at http://github.com/gweis/isodate.\n\n\n", "description": "ISO 8601 date/time parser and formatter."}, {"name": "ipython", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nIPython: Productive Interactive Computing\nOverview\nMain features of IPython\nDevelopment and Instant running\nIPython requires Python version 3 or above\nAlternatives to IPython\nIgnoring commits with git blame.ignoreRevsFile\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIPython: Productive Interactive Computing\n\nOverview\nWelcome to IPython.  Our full documentation is available on ipython.readthedocs.io and contains information on how to install, use, and\ncontribute to the project.\nIPython (Interactive Python) is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers introspection, rich media, shell syntax, tab completion, and history.\nIPython versions and Python Support\nStarting with IPython 7.10, IPython follows NEP 29\nIPython 7.17+ requires Python version 3.7 and above.\nIPython 7.10+ requires Python version 3.6 and above.\nIPython 7.0 requires Python version 3.5 and above.\nIPython 6.x requires Python version 3.3 and above.\nIPython 5.x LTS is the compatible release for Python 2.7.\nIf you require Python 2 support, you must use IPython 5.x LTS. Please\nupdate your project configurations and requirements as necessary.\nThe Notebook, Qt console and a number of other pieces are now parts of Jupyter.\nSee the Jupyter installation docs\nif you want to use these.\n\nMain features of IPython\nComprehensive object introspection.\nInput history, persistent across sessions.\nCaching of output results during a session with automatically generated references.\nExtensible tab completion, with support by default for completion of python variables and keywords, filenames and function keywords.\nExtensible system of \u2018magic\u2019 commands for controlling the environment and performing many tasks related to IPython or the operating system.\nA rich configuration system with easy switching between different setups (simpler than changing $PYTHONSTARTUP environment variables every time).\nSession logging and reloading.\nExtensible syntax processing for special purpose situations.\nAccess to the system shell with user-extensible alias system.\nEasily embeddable in other Python programs and GUIs.\nIntegrated access to the pdb debugger and the Python profiler.\n\nDevelopment and Instant running\nYou can find the latest version of the development documentation on readthedocs.\nYou can run IPython from this directory without even installing it system-wide\nby typing at the terminal:\n$ python -m IPython\n\nOr see the development installation docs\nfor the latest revision on read the docs.\nDocumentation and installation instructions for older version of IPython can be\nfound on the IPython website\n\nIPython requires Python version 3 or above\nStarting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or\n3.2.\nFor a version compatible with Python 2.7, please install the 5.x LTS Long Term\nSupport version.\nIf you are encountering this error message 
you are likely trying to install or\nuse IPython from source. You need to checkout the remote 5.x branch. If you are\nusing git the following should work:\n$ git fetch origin\n$ git checkout 5.x\n\nIf you encounter this error message with a regular install of IPython, then you\nlikely need to update your package manager, for example if you are using pip\ncheck the version of pip with:\n$ pip --version\n\nYou will need to update pip to the version 9.0.1 or greater. If you are not using\npip, please inquiry with the maintainers of the package for your package\nmanager.\nFor more information see one of our blog posts:\n\nhttps://blog.jupyter.org/release-of-ipython-5-0-8ce60b8d2e8e\nAs well as the following Pull-Request for discussion:\n\n#9900\nThis error does also occur if you are invoking setup.py directly \u2013\u00a0which you\nshould not \u2013\u00a0or are using easy_install If this is the case, use pip\ninstall . instead of setup.py install , and pip install -e . instead\nof setup.py develop If you are depending on IPython as a dependency you may\nalso want to have a conditional dependency on IPython depending on the Python\nversion:\ninstall_req = ['ipython']\nif sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv:\n    install_req.remove('ipython')\n    install_req.append('ipython<6')\n\nsetup(\n    ...\n    install_requires=install_req\n)\n\n\nAlternatives to IPython\nIPython may not be to your taste; if that's the case there might be similar\nproject that you might want to use:\n\nThe classic Python REPL.\nbpython\nmypython\nptpython and ptipython\nXonsh\n\n\nIgnoring commits with git blame.ignoreRevsFile\nAs of git 2.23, it is possible to make formatting changes without breaking\ngit blame. See the git documentation\nfor more details.\nTo use this feature you must:\n\nInstall git >= 2.23\n\nConfigure your local git repo by running:\n\nPOSIX: tools\\configure-git-blame-ignore-revs.sh\nWindows:  tools\\configure-git-blame-ignore-revs.bat\n\n\n\n\n\n\n\n", "description": "Enhanced interactive Python shell."}, {"name": "ipython-genutils", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "ipykernel", "readme": "\nIPython Kernel for Jupyter\n\n\nThis package provides the IPython kernel for Jupyter.\nInstallation from source\n\ngit clone\ncd ipykernel\npip install -e \".[test]\"\n\nAfter that, all normal ipython commands will use this newly-installed version of the kernel.\nRunning tests\nFollow the instructions from Installation from source.\nand then from the root directory\npytest ipykernel\n\nRunning tests with coverage\nFollow the instructions from Installation from source.\nand then from the root directory\npytest ipykernel -vv -s --cov ipykernel --cov-branch --cov-report term-missing:skip-covered --durations 10\n\nAbout the IPython Development Team\nThe IPython Development Team is the set of all contributors to the IPython project.\nThis includes all of the IPython subprojects.\nThe core team that coordinates development on GitHub can be found here:\nhttps://github.com/ipython/.\nOur Copyright Policy\nIPython uses a shared copyright model. Each contributor maintains copyright\nover their contributions to IPython. But, it is important to note that these\ncontributions are typically only changes to the repositories. Thus, the IPython\nsource code, in its entirety is not the copyright of any single person or\ninstitution. Instead, it is the collective copyright of the entire IPython\nDevelopment Team. If individual contributors want to maintain a record of what\nchanges/contributions they have specific copyright on, they should indicate\ntheir copyright in the commit message of the change, when they commit the\nchange to one of the IPython repositories.\nWith this in mind, the following banner should be used in any source code file\nto indicate the copyright and license terms:\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\n"}, {"name": "iniconfig", "readme": "\n\n\n\n\n\n\n\n\n\n\n\niniconfig: brain-dead simple parsing of ini files\nBasic Example\n\n\n\n\n\nREADME.rst\n\n\n\n\n\niniconfig: brain-dead simple parsing of ini files\niniconfig is a small and simple INI-file parser module\nhaving a unique set of features:\n\nmaintains order of sections and entries\nsupports multi-line values with or without line-continuations\nsupports \"#\" comments everywhere\nraises errors with proper line-numbers\nno bells and whistles like automatic substitutions\niniconfig raises an Error if two sections have the same name.\n\nIf you encounter issues or have feature wishes please report them to:\n\nhttps://github.com/RonnyPfannschmidt/iniconfig/issues\n\nBasic Example\nIf you have an ini file like this:\n# content of example.ini\n[section1] # comment\nname1=value1  # comment\nname1b=value1,value2  # comment\n\n[section2]\nname2=\n    line1\n    line2\nthen you can do:\n>>> import iniconfig\n>>> ini = iniconfig.IniConfig(\"example.ini\")\n>>> ini['section1']['name1'] # raises KeyError if not exists\n'value1'\n>>> ini.get('section1', 'name1b', [], lambda x: x.split(\",\"))\n['value1', 'value2']\n>>> ini.get('section1', 'notexist', [], lambda x: x.split(\",\"))\n[]\n>>> [x.name for x in list(ini)]\n['section1', 'section2']\n>>> list(list(ini)[0].items())\n[('name1', 'value1'), ('name1b', 'value1,value2')]\n>>> 
'section1' in ini\nTrue\n>>> 'inexistendsection' in ini\nFalse\n\n\n", "description": "Brain-dead simple INI-file parsing."}, {"name": "importlib-resources", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCompatibility\nFor Enterprise\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nimportlib_resources is a backport of Python standard library\nimportlib.resources\nmodule for older Pythons.\nThe key goal of this module is to replace parts of pkg_resources with a\nsolution in Python's stdlib that relies on well-defined APIs.  This makes\nreading resources included in packages easier, with more stable and consistent\nsemantics.\n\nCompatibility\nNew features are introduced in this third-party library and later merged\ninto CPython. The following table indicates which versions of this library\nwere contributed to different versions in the standard library:\n\n\nimportlib_resources\nstdlib\n\n\n\n6.0\n3.13\n\n5.12\n3.12\n\n5.7\n3.11\n\n5.0\n3.10\n\n1.3\n3.9\n\n0.5 (?)\n3.7\n\n\n\n\nFor Enterprise\nAvailable as part of the Tidelift Subscription.\nThis project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.\nLearn more.\n\n\n"}, {"name": "importlib-metadata", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCompatibility\nUsage\nCaveats\nProject details\nFor Enterprise\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLibrary to access the metadata for a Python package.\nThis package supplies third-party access to the functionality of\nimportlib.metadata\nincluding improvements added to subsequent Python versions.\n\nCompatibility\nNew features are introduced in this third-party library and later merged\ninto CPython. The following table indicates which versions of this library\nwere contributed to different versions in the standard library:\n\n\nimportlib_metadata\nstdlib\n\n\n\n6.5\n3.12\n\n4.13\n3.11\n\n4.6\n3.10\n\n1.4\n3.8\n\n\n\n\nUsage\nSee the online documentation\nfor usage details.\nFinder authors can\nalso add support for custom package installers.  See the above documentation\nfor details.\n\nCaveats\nThis project primarily supports third-party packages installed by PyPA\ntools (or other conforming packages). It does not support:\n\nPackages in the stdlib.\nPackages installed without metadata.\n\n\nProject details\n\n\nProject home: https://github.com/python/importlib_metadata\nReport bugs at: https://github.com/python/importlib_metadata/issues\nCode hosting: https://github.com/python/importlib_metadata\nDocumentation: https://importlib-metadata.readthedocs.io/\n\n\n\nFor Enterprise\nAvailable as part of the Tidelift Subscription.\nThis project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.\nLearn more.\n\n\n"}, {"name": "imgkit", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIMGKit: Python library of HTML to IMG wrapper\nInstallation\nUsage\nConfiguration\nTroubleshooting\nCredit\nIMGKit author\nContributors\nChange log\n\n\n\n\n\nREADME.md\n\n\n\n\nIMGKit: Python library of HTML to IMG wrapper\n\n\n\n\n\n  _____   __  __    _____   _  __  _   _\n |_   _| |  \\/  |  / ____| | |/ / (_) | |\n   | |   | \\  / | | |  __  | ' /   _  | |_\n   | |   | |\\/| | | | |_ | |  <   | | | __|\n  _| |_  | |  | | | |__| | | . 
\\  | | | |_\n |_____| |_|  |_|  \\_____| |_|\\_\\ |_|  \\__|\n\n\nPython 2 and 3 wrapper for wkhtmltoimage utility to convert HTML to IMG using Webkit.\nInstallation\n\n\nInstall imgkit:\npip install imgkit\n\n\nInstall wkhtmltopdf:\n\n\nDebian/Ubuntu:\nsudo apt-get install wkhtmltopdf\nWarning! Version in debian/ubuntu repos have reduced functionality (because it compiled without the wkhtmltopdf QT patches), such as adding outlines, headers, footers, TOC etc. To use this options you should install static binary from wkhtmltopdf site or you can use this script.\n\n\nMacOSX:\nbrew install --cask wkhtmltopdf\n\n\nWindows and other options:\nCheck wkhtmltopdf homepage for binary installers or wiki page.\n\n\n\n\nUsage\nSimple example:\nimport imgkit\n\nimgkit.from_url('http://google.com', 'out.jpg')\nimgkit.from_file('test.html', 'out.jpg')\nimgkit.from_string('Hello!', 'out.jpg')\nAlso you can pass an opened file:\nwith open('file.html') as f:\n    imgkit.from_file(f, 'out.jpg')\nIf you wish to further process generated IMG, you can read it to a variable:\n# Use False instead of output path to save pdf to a variable\nimg = imgkit.from_url('http://google.com', False)\nYou can find all wkhtmltoimage options by type wkhtmltoimage command or visit this Manual. You can drop '--' in option name. If option without value, use None, False or '' for dict value:. For repeatable options (incl. allow, cookie, custom-header, post, postfile, run-script, replace) you may use a list or a tuple. With option that need multiple values (e.g. --custom-header Authorization secret) we may use a 2-tuple (see example below).\noptions = {\n    'format': 'png',\n    'crop-h': '3',\n    'crop-w': '3',\n    'crop-x': '3',\n    'crop-y': '3',\n    'encoding': \"UTF-8\",\n    'custom-header' : [\n        ('Accept-Encoding', 'gzip')\n    ],\n    'cookie': [\n        ('cookie-name1', 'cookie-value1'),\n        ('cookie-name2', 'cookie-value2'),\n    ],\n    'no-outline': None\n}\n\nimgkit.from_url('http://google.com', 'out.png', options=options)\nAt some headless servers, perhaps you need to install xvfb:\n# at ubuntu server, etc.\nsudo apt-get install xvfb\n# at centos server, etc.\nyum install xorg-x11-server-Xvfb\nThen use IMGKit with option xvfb: {\"xvfb\": \"\"}.\nBy default, IMGKit will show all wkhtmltoimage output. If you don't want it, you need to pass quiet option:\noptions = {\n    'quiet': ''\n    }\n\nimgkit.from_url('google.com', 'out.jpg', options=options)\nDue to wkhtmltoimage command syntax, TOC and Cover options must be specified separately. If you need cover before TOC, use cover_first option:\ntoc = {\n    'xsl-style-sheet': 'toc.xsl'\n}\n\ncover = 'cover.html'\n\nimgkit.from_file('file.html', options=options, toc=toc, cover=cover)\nimgkit.from_file('file.html', options=options, toc=toc, cover=cover, cover_first=True)\nYou can specify external CSS files when converting files or strings using css option.\n# Single CSS file\ncss = 'example.css'\nimgkit.from_file('file.html', options=options, css=css)\n\n# Multiple CSS files\ncss = ['example.css', 'example2.css']\nimgkit.from_file('file.html', options=options, css=css)\nYou can also pass any options through meta tags in your HTML:\nbody = \"\"\"\n<html>\n  <head>\n    <meta name=\"imgkit-format\" content=\"png\"/>\n    <meta name=\"imgkit-orientation\" content=\"Landscape\"/>\n  </head>\n  Hello World!\n</html>\n\"\"\"\n\nimgkit.from_string(body, 'out.png')\nConfiguration\nEach API call takes an optional config paramater. 
This should be an instance of imgkit.config() API call. It takes the config options as initial paramaters. The available options are:\n\nwkhtmltoimage - the location of the wkhtmltoimage binary. By default imgkit will attempt to locate this using which (on UNIX type systems) or where (on Windows).\nxvfb - the location of the xvfb-run binary. By default imgkit will attempt to locate this using which (on UNIX type systems) or where (on Windows).\nmeta_tag_prefix - the prefix for imgkit specific meta tags - by default this is imgkit-\n\nExample - for when wkhtmltopdf or xvfb is not in $PATH:\nconfig = imgkit.config(wkhtmltoimage='/opt/bin/wkhtmltoimage', xvfb='/opt/bin/xvfb-run')\nimgkit.from_string(html_string, output_file, config=config)\nTroubleshooting\n\n\nIOError: 'No wkhtmltopdf executable found':\nMake sure that you have wkhtmltoimage in your $PATH or set via custom configuration (see preceding section). where wkhtmltoimage in Windows or which wkhtmltoimage on Linux should return actual path to binary.\n\n\nIOError: 'No xvfb executable found':\nMake sure that you have xvfb-run in your $PATH or set via custom configuration (see preceding section). where xvfb in Windows or which xvfb-run or which Xvfb on Linux should return actual path to binary.\n\n\nIOError: 'Command Failed':\nThis error means that IMGKit was unable to process an input. You can try to directly run a command from error message and see what error caused failure (on some wkhtmltoimage versions this can be cause by segmentation faults)\n\n\nCredit\npython PDFKit\nIMGKit author\n\njarrekk https://github.com/jarrekk\n\nContributors\n\nv-hunt https://github.com/v-hunt\narchydeberker https://github.com/archydeberker\narayate https://github.com/arayate\nxtrntr https://github.com/xtrntr\nmike1703 https://github.com/mike1703\nthemeewa https://github.com/themeewa\n\nChange log\nGo to https://github.com/jarrekk/imgkit/wiki/CHANGE-LOG.\n\n\n", "description": "HTML to image conversion using wkhtmltoimage."}, {"name": "IMAPClient", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nEssentials\nFeatures\nExample\nWhy IMAPClient?\nInstalling IMAPClient\nDocumentation\nCurrent Status\nDiscussions\nWorking on IMAPClient\nIMAP Servers\nInteractive Console\n\"Live\" Tests\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nEssentials\nIMAPClient is an easy-to-use, Pythonic and complete IMAP client\nlibrary.\n\n\nCurrent version\n2.3.1\n\nSupported Python versions\n3.7 - 3.11\n\nLicense\nNew BSD\n\nProject home\nhttps://github.com/mjs/imapclient/\n\nPyPI\nhttps://pypi.python.org/pypi/IMAPClient\n\nDocumentation\nhttps://imapclient.readthedocs.io/\n\nDiscussions\nhttps://github.com/mjs/imapclient/discussions\n\nTest Status\n\n\n\n\n\n\nFeatures\n\nArguments and return values are natural Python types.\nIMAP server responses are fully parsed and readily usable.\nIMAP unique message IDs (UIDs) are handled transparently. There is\nno need to call different methods to use UIDs.\nEscaping for internationalised mailbox names is transparently\nhandled.  
Unicode mailbox names may be passed as input wherever a\nfolder name is accepted.\nTime zones are transparently handled including when the server and\nclient are in different zones.\nConvenience methods are provided for commonly used functionality.\nExceptions are raised when errors occur.\n\n\nExample\nfrom imapclient import IMAPClient\n\n# context manager ensures the session is cleaned up\nwith IMAPClient(host=\"imap.host.org\") as client:\n    client.login('someone', 'secret')\n    client.select_folder('INBOX')\n\n    # search criteria are passed in a straightforward way\n    # (nesting is supported)\n    messages = client.search(['NOT', 'DELETED'])\n\n    # fetch selectors are passed as a simple list of strings.\n    response = client.fetch(messages, ['FLAGS', 'RFC822.SIZE'])\n\n    # `response` is keyed by message id and contains parsed,\n    # converted response items.\n    for message_id, data in response.items():\n        print('{id}: {size} bytes, flags={flags}'.format(\n            id=message_id,\n            size=data[b'RFC822.SIZE'],\n            flags=data[b'FLAGS']))\n\nWhy IMAPClient?\nYou may ask: \"why create another IMAP client library for Python?\nDoesn't the Python standard library already have imaplib?\".\nThe problem with imaplib is that it's very low-level. It expects\nstring values where lists or tuples would be more appropriate and\nreturns server responses almost unparsed. As IMAP server responses can\nbe quite complex this means everyone using imaplib ends up writing\ntheir own fragile parsing routines.\nAlso, imaplib doesn't make good use of exceptions. This means you need\nto check the return value of each call to imaplib to see if what you\njust did was successful.\nIMAPClient actually uses imaplib internally. This may change at some\npoint in the future.\n\nInstalling IMAPClient\nIMAPClient is listed on PyPI and can be installed with pip:\npip install imapclient\n\nMore installation methods are described in the documentation.\n\nDocumentation\nIMAPClient's manual is available at http://imapclient.readthedocs.io/.\nRelease notes can be found at\nhttp://imapclient.readthedocs.io/#release-history.\nSee the examples directory in the root of project source for\nexamples of how to use IMAPClient.\n\nCurrent Status\nYou should feel confident using IMAPClient for production purposes.\nIn order to clearly communicate version compatibility, IMAPClient\nwill strictly adhere to the Semantic Versioning\nscheme from version 1.0 onwards.\nThe project's home page is https://github.com/mjs/imapclient/ (this\ncurrently redirects to the IMAPClient Github site). Details about\nupcoming versions and planned features/fixes can be found in the issue\ntracker on Github. The maintainers also blog about IMAPClient\nnews. Those articles can be found here.\n\nDiscussions\nGithub Discussions can be used to ask questions, propose changes or praise\nthe project maintainers :)\n\nWorking on IMAPClient\nThe contributing documentation contains\ninformation for those interested in improving IMAPClient.\n\nIMAP Servers\nIMAPClient is heavily tested against Dovecot, Gmail, Fastmail.fm\n(who use a modified Cyrus implementation), Office365 and Yahoo. Access\nto accounts on other IMAP servers/services for testing would be\ngreatly appreciated.\n\nInteractive Console\nThis script connects an IMAPClient instance using the command line\nargs given and starts an interactive session. 
This is useful for\nexploring the IMAPClient API and testing things out, avoiding the\nsteps required to set up an IMAPClient instance.\nThe IPython shell is used if it is installed. Otherwise the\ncode.interact() function from the standard library is used.\nThe interactive console functionality can be accessed running the\ninteract.py script in the root of the source tree or by invoking the\ninteract module like this:\npython -m imapclient.interact ...\n\n\n\"Live\" Tests\nIMAPClient includes a series of live, functional tests which exercise\nit against a live IMAP account. These are useful for ensuring\ncompatibility with a given IMAP server implementation.\nThe livetest functionality are run from the root of the project source\nlike this:\npython livetest.py <livetest.ini> [ optional unittest arguments ]\n\nThe configuration file format is\ndescribed in the main documentation.\nWARNING: The operations used by livetest are destructive and could\ncause unintended loss of data. That said, as of version 0.9, livetest\nlimits its activity to a folder it creates and subfolders of that\nfolder. It should be safe to use with any IMAP account but please\ndon't run livetest against a truly important IMAP account.\nPlease include the output of livetest.py with an issue if it fails\nto run successfully against a particular IMAP server. Reports of\nsuccessful runs are also welcome.  Please include the type and version\nof the IMAP server, if known.\n\n\n", "description": "Easy-to-use, Pythonic IMAP client library."}, {"name": "imageio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIMAGEIO\nExample\nAPI in a nutshell\nFeatures\nDependencies\nCiting imageio\nSecurity contact information\nImageIO for enterprise\nDetails\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\nIMAGEIO\n\n\n\n\n\n\n\n\nWebsite: https://imageio.readthedocs.io/\n\nImageio is a Python library that provides an easy interface to read and\nwrite a wide range of image data, including animated images, video,\nvolumetric data, and scientific formats. It is cross-platform, runs on\nPython 3.8+, and is easy to install.\n\n\n    Professional support is available via Tidelift.\n\nExample\nHere's a minimal example of how to use imageio. 
See the docs for\nmore examples.\nimport imageio.v3 as iio\nim = iio.imread('imageio:chelsea.png')  # read a standard image\nim.shape  # im is a NumPy array of shape (300, 451, 3)\niio.imwrite('chelsea.jpg', im)  # convert to jpg\nAPI in a nutshell\nAs a user, you just have to remember a handful of functions:\n\nimread() - for reading\nimwrite() - for writing\nimiter() - for iterating image series (animations/videos/OME-TIFF/...)\nimprops() - for standardized metadata\nimmeta() - for format-specific metadata\nimopen() - for advanced usage\n\nSee the API docs for more information.\nFeatures\n\nSimple interface via a concise set of functions\nEasy to install using Conda or pip\nFew dependencies (only NumPy and Pillow)\nPure Python, runs on Python 3.8+, and PyPy\nCross platform, runs on Windows, Linux, macOS\nMore than 295 supported formats\nRead/Write support for various resources (files, URLs, bytes, FileLike objects, ...)\nCode quality is maintained via continuous integration and continuous deployment\n\nDependencies\nMinimal requirements:\n\nPython 3.8+\nNumPy\nPillow >= 8.3.2\n\nOptional Python packages:\n\nimageio-ffmpeg (for working with video files)\npyav (for working with video files)\ntifffile (for working with TIFF files)\nitk or SimpleITK (for ITK plugin)\nastropy (for FITS plugin)\nimageio-flif (for working with FLIF image files)\n\nCiting imageio\n\nIf you use imageio for scientific work, we would appreciate a citation.\nWe have a DOI!\n\nSecurity contact information\nTo report a security vulnerability, please use the\nTidelift security contact.\nTidelift will coordinate the fix and disclosure.\nImageIO for enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of imageio and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use.\nLearn more.\nDetails\n\n    The core of ImageIO is a set of user-facing APIs combined with a plugin manager. API calls choose sensible defaults and then call the plugin manager, which deduces the correct plugin/backend to use for the given resource and file format. The plugin manager then adds sensible backend-specific defaults and then calls one of ImageIOs many backends to perform the actual loading. This allows ImageIO to take care of most of the gory details of loading images for you, while still allowing you to customize the behavior when and where you need to. You can find a more detailed explanation of this process in our documentation.\nContributing\nWe welcome contributions of any kind. Here are some suggestions on how you are able to contribute\n\nadd missing formats to the format list\nsuggest/implement support for new backends\nreport/fix any bugs you encounter while using ImageIO\n\nTo assist you in getting started with contributing code, take a look at the development section of the docs. 
You will find instructions on setting up the dev environment as well as examples on how to contribute code.\n\n\n", "description": "ffmpeg - FFmpeg wrapper for Python to read and write video frames using generators."}, {"name": "imageio-ffmpeg", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nimageio-ffmpeg\nPurpose\nInstallation\nExample usage\nHow it works\nimageio-ffmpeg for enterprise\nSecurity contact information\nEnvironment variables\nDevelopers\nAPI\n\n\n\n\n\nREADME.md\n\n\n\n\nimageio-ffmpeg\n\n\nFFMPEG wrapper for Python\nPurpose\nThe purpose of this project is to provide a simple and reliable ffmpeg\nwrapper for working with video files. It implements two simple generator\nfunctions for reading and writing data from/to ffmpeg, which reliably\nterminate the ffmpeg process when done. It also takes care of publishing\nplatform-specific wheels that include the binary ffmpeg executables.\nThis library is used as the basis for the\nimageio\nffmpeg plugin,\nbut it can also be used by itself. Imageio provides a higher level API,\nand adds support for e.g. cameras and seeking.\nInstallation\nThis library works with any version of Python 3.5+ (including Pypy).\nThere are no further dependencies. The wheels on Pypi include the ffmpeg\nexecutable for all common platforms (Windows 7+, Linux kernel 2.6.32+,\nOSX 10.9+). Install using:\n$ pip install --upgrade imageio-ffmpeg\n\n(On Linux you may want to first pip install -U pip, since pip 19 is needed to detect the manylinux2010 wheels.)\nIf you're using a Conda environment: the conda package does not include\nthe ffmpeg executable, but instead depends on the ffmpeg package from\nconda-forge. Install using:\n$ conda install imageio-ffmpeg -c conda-forge\n\nIf you don't want to install the included ffmpeg, you can use pip with\n--no-binary or conda with --no-deps. Then use the\nIMAGEIO_FFMPEG_EXE environment variable if needed.\nExample usage\nThe imageio_ffmpeg library provides low level functionality to read\nand write video data, using Python generators:\n# Read a video file\nreader = read_frames(path)\nmeta = reader.__next__()  # meta data, e.g. meta[\"size\"] -> (width, height)\nfor frame in reader:\n    ... # each frame is a bytes object\n\n# Write a video file\nwriter = write_frames(path, size)  # size is (width, height)\nwriter.send(None)  # seed the generator\nfor frame in frames:\n    writer.send(frame)\nwriter.close()  # don't forget this\n(Also see the API section further down.)\nHow it works\nThis library calls ffmpeg in a subprocess, and video frames are\ncommunicated over pipes. This is certainly not the fastest way to\nuse ffmpeg, but it makes it possible to wrap ffmpeg with pure Python,\nmaking distribution and installation much easier. And probably\nthe code itself too. In contrast, PyAV\nwraps ffmpeg at the C level.\nNote that because of how imageio-ffmpeg works, read_frames() and\nwrite_frames() only accept file names, and not file (like) objects.\nimageio-ffmpeg for enterprise\nAvailable as part of the Tidelift Subscription\nThe maintainers of imageio-ffmpeg and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. 
Learn more.\nSecurity contact information\nTo report a security vulnerability, please use the\nTidelift security contact.\nTidelift will coordinate the fix and disclosure.\nEnvironment variables\nThe library can be configured at runtime by setting the following environment\nvariables:\n\nIMAGEIO_FFMPEG_EXE=[file name] -- override the ffmpeg executable;\nIMAGEIO_FFMPEG_NO_PREVENT_SIGINT=1 -- don't prevent propagation of SIGINT\nto the ffmpeg process.\n\nDevelopers\nDev deps:\npip install invoke black flake8\n\nWe use invoke:\ninvoke autoformat\ninvoke lint\ninvoke -l  # to get a list of all tasks\ninvoke update-readme  # after changes to the docstrings\n\nAPI\ndef read_frames(\n    path,\n    pix_fmt=\"rgb24\",\n    bpp=None,\n    input_params=None,\n    output_params=None,\n    bits_per_pixel=None,\n):\n    \"\"\"\n    Create a generator to iterate over the frames in a video file.\n\n    It first yields a small metadata dictionary that contains:\n\n    * ffmpeg_version: the ffmpeg version in use (as a string).\n    * codec: a hint about the codec used to encode the video, e.g. \"h264\".\n    * source_size: the width and height of the encoded video frames.\n    * size: the width and height of the frames that will be produced.\n    * fps: the frames per second. Can be zero if it could not be detected.\n    * duration: duration in seconds. Can be zero if it could not be detected.\n\n    After that, it yields frames until the end of the video is reached. Each\n    frame is a bytes object.\n\n    This function makes no assumptions about the number of frames in\n    the data. For one because this is hard to predict exactly, but also\n    because it may depend on the provided output_params. If you want\n    to know the number of frames in a video file, use count_frames_and_secs().\n    It is also possible to estimate the number of frames from the fps and\n    duration, but note that even if both numbers are present, the resulting\n    value is not always correct.\n\n    Example:\n\n        gen = read_frames(path)\n        meta = gen.__next__()\n        for frame in gen:\n            print(len(frame))\n\n    Parameters:\n        path (str): the filename of the file to read from.\n        pix_fmt (str): the pixel format of the frames to be read.\n            The default is \"rgb24\" (frames are uint8 RGB images).\n        input_params (list): Additional ffmpeg input command line parameters.\n        output_params (list): Additional ffmpeg output command line parameters.\n        bits_per_pixel (int): The number of bits per pixel in the output frames.\n            This depends on the given pix_fmt. Default is 24 (RGB)\n        bpp (int): DEPRECATED, USE bits_per_pixel INSTEAD. The number of bytes per pixel in the output frames.\n            This depends on the given pix_fmt. Some pixel formats like yuv420p have 12 bits per pixel\n            and cannot be set in bytes as integer. For this reason the bpp argument is deprecated.\n    \"\"\"\ndef write_frames(\n    path,\n    size,\n    pix_fmt_in=\"rgb24\",\n    pix_fmt_out=\"yuv420p\",\n    fps=16,\n    quality=5,\n    bitrate=None,\n    codec=None,\n    macro_block_size=16,\n    ffmpeg_log_level=\"warning\",\n    ffmpeg_timeout=None,\n    input_params=None,\n    output_params=None,\n    audio_path=None,\n    audio_codec=None,\n):\n    \"\"\"\n    Create a generator to write frames (bytes objects) into a video file.\n\n    The frames are written by using the generator's `send()` method. Frames\n    can be anything that can be written to a file. 
Typically these are\n    bytes objects, but c-contiguous Numpy arrays also work.\n\n    Example:\n\n        gen = write_frames(path, size)\n        gen.send(None)  # seed the generator\n        for frame in frames:\n            gen.send(frame)\n        gen.close()  # don't forget this\n\n    Parameters:\n        path (str): the filename to write to.\n        size (tuple): the width and height of the frames.\n        pix_fmt_in (str): the pixel format of incoming frames.\n            E.g. \"gray\", \"gray8a\", \"rgb24\", or \"rgba\". Default \"rgb24\".\n        pix_fmt_out (str): the pixel format to store frames. Default yuv420p\".\n        fps (float): The frames per second. Default 16.\n        quality (float): A measure for quality between 0 and 10. Default 5.\n            Ignored if bitrate is given.\n        bitrate (str): The bitrate, e.g. \"192k\". The defaults are pretty good.\n        codec (str): The codec. Default \"libx264\" for .mp4 (if available from\n            the ffmpeg executable) or \"msmpeg4\" for .wmv.\n        macro_block_size (int): You probably want to align the size of frames\n            to this value to avoid image resizing. Default 16. Can be set\n            to 1 to avoid block alignment, though this is not recommended.\n        ffmpeg_log_level (str): The ffmpeg logging level. Default \"warning\".\n        ffmpeg_timeout (float): Timeout in seconds to wait for ffmpeg process\n            to finish. Value of 0 or None will wait forever (default). The time that\n            ffmpeg needs depends on CPU speed, compression, and frame size.\n        input_params (list): Additional ffmpeg input command line parameters.\n        output_params (list): Additional ffmpeg output command line parameters.\n        audio_path (str): A input file path for encoding with an audio stream.\n            Default None, no audio.\n        audio_codec (str): The audio codec to use if audio_path is provided.\n            \"copy\" will try to use audio_path's audio codec without re-encoding.\n            Default None, but some formats must have certain codecs specified.\n    \"\"\"\ndef count_frames_and_secs(path):\n    \"\"\"\n    Get the number of frames and number of seconds for the given video\n    file. Note that this operation can be quite slow for large files.\n\n    Disclaimer: I've seen this produce different results from actually reading\n    the frames with older versions of ffmpeg (2.x). Therefore I cannot say\n    with 100% certainty that the returned values are always exact.\n    \"\"\"\ndef get_ffmpeg_exe():\n    \"\"\"\n    Get the ffmpeg executable file. This can be the binary defined by\n    the IMAGEIO_FFMPEG_EXE environment variable, the binary distributed\n    with imageio-ffmpeg, an ffmpeg binary installed with conda, or the\n    system ffmpeg (in that order). A RuntimeError is raised if no valid\n    ffmpeg could be found.\n    \"\"\"\ndef get_ffmpeg_version():\n    \"\"\"\n    Get the version of the used ffmpeg executable (as a string).\n    \"\"\"\n\n\n"}, {"name": "hyperframe", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nhyperframe: Pure-Python HTTP/2 framing\nContributing\nLicense\nAuthors\n\n\n\n\n\nREADME.rst\n\n\n\n\nhyperframe: Pure-Python HTTP/2 framing\n\n\n\n\n\n\n\nThis library contains the HTTP/2 framing code used in the hyper project. 
It\nprovides a pure-Python codebase that is capable of decoding a binary stream\ninto HTTP/2 frames.\nThis library is used directly by hyper and a number of other projects to\nprovide HTTP/2 frame decoding logic.\n\nContributing\nhyperframe welcomes contributions from anyone! Unlike many other projects we\nare happy to accept cosmetic contributions and small contributions, in addition\nto large feature requests and changes.\nBefore you contribute (either by opening an issue or filing a pull request),\nplease read the contribution guidelines.\n\nLicense\nhyperframe is made available under the MIT License. For more details, see the\nLICENSE file in the repository.\n\nAuthors\nhyperframe is maintained by Cory Benfield, with contributions from others. For\nmore details about the contributors, please see CONTRIBUTORS.rst.\n\n\n", "description": "HTTP/2 framing layer for Python."}, {"name": "hypercorn", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nHypercorn\nQuickstart\nContributing\nTesting\nHelp\n\n\n\n\n\nREADME.rst\n\n\n\n\nHypercorn\n\n\n \n \n \n \n \n\nHypercorn is an ASGI and\nWSGI web server based on the sans-io hyper, h11, h2, and wsproto libraries and inspired by\nGunicorn. Hypercorn supports HTTP/1, HTTP/2, WebSockets (over HTTP/1\nand HTTP/2), ASGI, and WSGI specifications. Hypercorn can utilise\nasyncio, uvloop, or trio worker types.\nHypercorn can optionally serve the current draft of the HTTP/3\nspecification using the aioquic library. To enable this install\nthe h3 optional extra, pip install hypercorn[h3] and then\nchoose a quic binding e.g. hypercorn --quic-bind localhost:4433\n....\nHypercorn was initially part of Quart before being separated out into a\nstandalone server. Hypercorn forked from version 0.5.0 of Quart.\n\nQuickstart\nHypercorn can be installed via pip,\n$ pip install hypercorn\nand requires Python 3.7.0 or higher.\nWith hypercorn installed ASGI frameworks (or apps) can be served via\nHypercorn via the command line,\n$ hypercorn module:app\nAlternatively Hypercorn can be used programatically,\nimport asyncio\nfrom hypercorn.config import Config\nfrom hypercorn.asyncio import serve\n\nfrom module import app\n\nasyncio.run(serve(app, Config()))\nlearn more (including a Trio example of the above) in the API usage\ndocs.\n\nContributing\nHypercorn is developed on Github. If you come across an issue,\nor have a feature request please open an issue.  If you want to\ncontribute a fix or the feature-implementation please do (typo fixes\nwelcome), by proposing a pull request.\n\nTesting\nThe best way to test Hypercorn is with Tox,\n$ pipenv install tox\n$ tox\nthis will check the code style and run the tests.\n\nHelp\nThe Hypercorn documentation is\nthe best place to start, after that try searching stack overflow, if\nyou still can't find an answer please open an issue.\n\n\n", "description": "HTTP/1, HTTP/2, and Websocket ASGI server based on Hyper libraries and inspired by Gunicorn."}, {"name": "httpx", "readme": "\n\n\n\nHTTPX - A next-generation HTTP client for Python.\n\n\n\n\n\n\n\n\nHTTPX is a fully featured HTTP client library for Python 3. 
It includes an integrated\ncommand line client, has support for both HTTP/1.1 and HTTP/2, and provides both sync\nand async APIs.\n\nInstall HTTPX using pip:\n$ pip install httpx\n\nNow, let's get started:\n>>> import httpx\n>>> r = httpx.get('https://www.example.org/')\n>>> r\n<Response [200 OK]>\n>>> r.status_code\n200\n>>> r.headers['content-type']\n'text/html; charset=UTF-8'\n>>> r.text\n'<!doctype html>\\n<html>\\n<head>\\n<title>Example Domain</title>...'\n\nOr, using the command-line client.\n$ pip install 'httpx[cli]'  # The command line client is an optional dependency.\n\nWhich now allows us to use HTTPX directly from the command-line...\n\n\n\nSending a request...\n\n\n\nFeatures\nHTTPX builds on the well-established usability of requests, and gives you:\n\nA broadly requests-compatible API.\nAn integrated command-line client.\nHTTP/1.1 and HTTP/2 support.\nStandard synchronous interface, but with async support if you need it.\nAbility to make requests directly to WSGI applications or ASGI applications.\nStrict timeouts everywhere.\nFully type annotated.\n100% test coverage.\n\nPlus all the standard features of requests...\n\nInternational Domains and URLs\nKeep-Alive & Connection Pooling\nSessions with Cookie Persistence\nBrowser-style SSL Verification\nBasic/Digest Authentication\nElegant Key/Value Cookies\nAutomatic Decompression\nAutomatic Content Decoding\nUnicode Response Bodies\nMultipart File Uploads\nHTTP(S) Proxy Support\nConnection Timeouts\nStreaming Downloads\n.netrc Support\nChunked Requests\n\nInstallation\nInstall with pip:\n$ pip install httpx\n\nOr, to include the optional HTTP/2 support, use:\n$ pip install httpx[http2]\n\nHTTPX requires Python 3.7+.\nDocumentation\nProject documentation is available at https://www.python-httpx.org/.\nFor a run-through of all the basics, head over to the QuickStart.\nFor more advanced topics, see the Advanced Usage section, the async support section, or the HTTP/2 section.\nThe Developer Interface provides a comprehensive API reference.\nTo find out about tools that integrate with HTTPX, see Third Party Packages.\nContribute\nIf you want to contribute with HTTPX check out the Contributing Guide to learn how to start.\nDependencies\nThe HTTPX project relies on these excellent libraries:\n\nhttpcore - The underlying transport implementation for httpx.\n\nh11 - HTTP/1.1 support.\n\n\ncertifi - SSL certificates.\nidna - Internationalized domain name support.\nsniffio - Async library autodetection.\n\nAs well as these optional installs:\n\nh2 - HTTP/2 support. (Optional, with httpx[http2])\nsocksio - SOCKS proxy support. (Optional, with httpx[socks])\nrich - Rich terminal support. (Optional, with httpx[cli])\nclick - Command line client support. (Optional, with httpx[cli])\nbrotli or brotlicffi - Decoding for \"brotli\" compressed responses. (Optional, with httpx[brotli])\n\nA huge amount of credit is due to requests for the API layout that\nmuch of this work follows, as well as to urllib3 for plenty of design\ninspiration around the lower-level networking details.\n\nHTTPX is BSD licensed code.Designed & crafted with care.\u2014 \ud83e\udd8b \u2014\nRelease Information\nAdded\n\nProvide additional context in some InvalidURL exceptions. (#2675)\n\nFixed\n\nFix optional percent-encoding behaviour. (#2671)\nMore robust checking for opening upload files in binary mode. (#2630)\nProperly support IP addresses in NO_PROXY environment variable. (#2659)\nSet default file for NetRCAuth() to None to use the stdlib default. 
(#2667)\nSet logging request lines to INFO level for async requests, in line with sync requests. (#2656)\nFix which gen-delims need to be escaped for path/query/fragment components in URL. (#2701)\n\n\nFull changelog\n", "description": "Fully featured HTTP client for Python 3."}, {"name": "httptools", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nAPIs\nDevelopment\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\n\nhttptools is a Python binding for the nodejs HTTP parser.\nThe package is available on PyPI: pip install httptools.\nAPIs\nhttptools contains two classes httptools.HttpRequestParser,\nhttptools.HttpResponseParser (fulfilled through\nllhttp) and a function for\nparsing URLs httptools.parse_url (through\nhttp-parse for now).\nSee unittests for examples.\nclass HttpRequestParser:\n\n    def __init__(self, protocol):\n        \"\"\"HttpRequestParser\n\n        protocol -- a Python object with the following methods\n        (all optional):\n\n          - on_message_begin()\n          - on_url(url: bytes)\n          - on_header(name: bytes, value: bytes)\n          - on_headers_complete()\n          - on_body(body: bytes)\n          - on_message_complete()\n          - on_chunk_header()\n          - on_chunk_complete()\n          - on_status(status: bytes)\n        \"\"\"\n\n    def get_http_version(self) -> str:\n        \"\"\"Return an HTTP protocol version.\"\"\"\n\n    def should_keep_alive(self) -> bool:\n        \"\"\"Return ``True`` if keep-alive mode is preferred.\"\"\"\n\n    def should_upgrade(self) -> bool:\n        \"\"\"Return ``True`` if the parsed request is a valid Upgrade request.\n\tThe method exposes a flag set just before on_headers_complete.\n\tCalling this method earlier will only yield `False`.\n\t\"\"\"\n\n    def feed_data(self, data: bytes):\n        \"\"\"Feed data to the parser.\n\n        Will eventually trigger callbacks on the ``protocol``\n        object.\n\n        On HTTP upgrade, this method will raise an\n        ``HttpParserUpgrade`` exception, with its sole argument\n        set to the offset of the non-HTTP data in ``data``.\n        \"\"\"\n\n    def get_method(self) -> bytes:\n        \"\"\"Return HTTP request method (GET, HEAD, etc)\"\"\"\n\n\nclass HttpResponseParser:\n\n    \"\"\"Has all methods except ``get_method()`` that\n    HttpRequestParser has.\"\"\"\n\n    def get_status_code(self) -> int:\n        \"\"\"Return the status code of the HTTP response\"\"\"\n\n\ndef parse_url(url: bytes):\n    \"\"\"Parse URL strings into a structured Python object.\n\n    Returns an instance of ``httptools.URL`` class with the\n    following attributes:\n\n      - schema: bytes\n      - host: bytes\n      - port: int\n      - path: bytes\n      - query: bytes\n      - fragment: bytes\n      - userinfo: bytes\n    \"\"\"\nDevelopment\n\n\nClone this repository with\ngit clone --recursive git@github.com:MagicStack/httptools.git\n\n\nCreate a virtual environment with Python 3:\npython3 -m venv envname\n\n\nActivate the environment with source envname/bin/activate\n\n\nInstall development requirements with pip install -e .[test]\n\n\nRun make and make test.\n\n\nLicense\nMIT.\n\n\n", "description": "Python bindings for nodejs HTTP parser with no dependencies."}, {"name": "httpcore", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nHTTP Core\nRequirements\nInstallation\nSending requests\nMotivation\n\n\n\n\n\nREADME.md\n\n\n\n\nHTTP Core\n\n\n\nDo one thing, and do it well.\n\nThe HTTP Core package provides a minimal low-level HTTP client, which does\none thing only. 
Sending HTTP requests.\nIt does not provide any high level model abstractions over the API,\ndoes not handle redirects, multipart uploads, building authentication headers,\ntransparent HTTP caching, URL parsing, session cookie handling,\ncontent or charset decoding, handling JSON, environment based configuration\ndefaults, or any of that Jazz.\nSome things HTTP Core does do:\n\nSending HTTP requests.\nThread-safe / task-safe connection pooling.\nHTTP(S) proxy & SOCKS proxy support.\nSupports HTTP/1.1 and HTTP/2.\nProvides both sync and async interfaces.\nAsync backend support for asyncio and trio.\n\nRequirements\nPython 3.8+\nInstallation\nFor HTTP/1.1 only support, install with:\n$ pip install httpcore\nFor HTTP/1.1 and HTTP/2 support, install with:\n$ pip install httpcore[http2]\nFor SOCKS proxy support, install with:\n$ pip install httpcore[socks]\nSending requests\nSend an HTTP request:\nimport httpcore\n\nresponse = httpcore.request(\"GET\", \"https://www.example.com/\")\n\nprint(response)\n# <Response [200]>\nprint(response.status)\n# 200\nprint(response.headers)\n# [(b'Accept-Ranges', b'bytes'), (b'Age', b'557328'), (b'Cache-Control', b'max-age=604800'), ...]\nprint(response.content)\n# b'<!doctype html>\\n<html>\\n<head>\\n<title>Example Domain</title>\\n\\n<meta charset=\"utf-8\"/>\\n ...'\nThe top-level httpcore.request() function is provided for convenience. In practice whenever you're working with httpcore you'll want to use the connection pooling functionality that it provides.\nimport httpcore\n\nhttp = httpcore.ConnectionPool()\nresponse = http.request(\"GET\", \"https://www.example.com/\")\nOnce you're ready to get going, head over to the documentation.\nMotivation\nYou probably don't want to be using HTTP Core directly. It might make sense if\nyou're writing something like a proxy service in Python, and you just want\nsomething at the lowest possible level, but more typically you'll want to use\na higher level client library, such as httpx.\nThe motivation for httpcore is:\n\nTo provide a reusable low-level client library, that other packages can then build on top of.\nTo provide a really clear interface split between the networking code and client logic,\nso that each is easier to understand and reason about in isolation.\n\n\n\n", "description": "Minimal low-level HTTP client."}, {"name": "html5lib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nhtml5lib\nUsage\nInstallation\nOptional Dependencies\nBugs\nTests\nQuestions?\n\n\n\n\n\nREADME.rst\n\n\n\n\nhtml5lib\n\n\nhtml5lib is a pure-python library for parsing HTML. It is designed to\nconform to the WHATWG HTML specification, as is implemented by all major\nweb browsers.\n\nUsage\nSimple usage follows this pattern:\nimport html5lib\nwith open(\"mydocument.html\", \"rb\") as f:\n    document = html5lib.parse(f)\nor:\nimport html5lib\ndocument = html5lib.parse(\"<p>Hello World!\")\nBy default, the document will be an xml.etree element instance.\nWhenever possible, html5lib chooses the accelerated ElementTree\nimplementation (i.e. xml.etree.cElementTree on Python 2.x).\nTwo other tree types are supported: xml.dom.minidom and\nlxml.etree. 
To use an alternative format, specify the name of\na treebuilder:\nimport html5lib\nwith open(\"mydocument.html\", \"rb\") as f:\n    lxml_etree_document = html5lib.parse(f, treebuilder=\"lxml\")\nWhen using with urllib2 (Python 2), the charset from HTTP should be\npass into html5lib as follows:\nfrom contextlib import closing\nfrom urllib2 import urlopen\nimport html5lib\n\nwith closing(urlopen(\"http://example.com/\")) as f:\n    document = html5lib.parse(f, transport_encoding=f.info().getparam(\"charset\"))\nWhen using with urllib.request (Python 3), the charset from HTTP\nshould be pass into html5lib as follows:\nfrom urllib.request import urlopen\nimport html5lib\n\nwith urlopen(\"http://example.com/\") as f:\n    document = html5lib.parse(f, transport_encoding=f.info().get_content_charset())\nTo have more control over the parser, create a parser object explicitly.\nFor instance, to make the parser raise exceptions on parse errors, use:\nimport html5lib\nwith open(\"mydocument.html\", \"rb\") as f:\n    parser = html5lib.HTMLParser(strict=True)\n    document = parser.parse(f)\nWhen you're instantiating parser objects explicitly, pass a treebuilder\nclass as the tree keyword argument to use an alternative document\nformat:\nimport html5lib\nparser = html5lib.HTMLParser(tree=html5lib.getTreeBuilder(\"dom\"))\nminidom_document = parser.parse(\"<p>Hello World!\")\nMore documentation is available at https://html5lib.readthedocs.io/.\n\nInstallation\nhtml5lib works on CPython 2.7+, CPython 3.5+ and PyPy. To install:\n$ pip install html5lib\nThe goal is to support a (non-strict) superset of the versions that pip\nsupports.\n\nOptional Dependencies\nThe following third-party libraries may be used for additional\nfunctionality:\n\nlxml is supported as a tree format (for both building and\nwalking) under CPython (but not PyPy where it is known to cause\nsegfaults);\ngenshi has a treewalker (but not builder); and\nchardet can be used as a fallback when character encoding cannot\nbe determined.\n\n\nBugs\nPlease report any bugs on the issue tracker.\n\nTests\nUnit tests require the pytest and mock libraries and can be\nrun using the pytest command in the root directory.\nTest data are contained in a separate html5lib-tests repository and included\nas a submodule, thus for git checkouts they must be initialized:\n$ git submodule init\n$ git submodule update\n\nIf you have all compatible Python implementations available on your\nsystem, you can run tests on all of them using the tox utility,\nwhich can be found on PyPI.\n\nQuestions?\nCheck out the docs. Still\nneed help? Go to our GitHub Discussions.\nYou can also browse the archives of the html5lib-discuss mailing list.\n\n\n", "description": "HTML parser based on the WHATWG HTML specification."}, {"name": "hpack", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nhpack: HTTP/2 Header Encoding for Python\nDocumentation\nContributing\nLicense\nAuthors\n\n\n\n\n\nREADME.rst\n\n\n\n\nhpack: HTTP/2 Header Encoding for Python\n\n\n\n\n\n\n\n\nThis module contains a pure-Python HTTP/2 header encoding (HPACK) logic for use\nin Python programs that implement HTTP/2.\n\nDocumentation\nDocumentation is available at https://hpack.readthedocs.io .\n\nContributing\nhpack welcomes contributions from anyone! 
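To give a feel for the header encoding logic described above, a minimal encode/decode round trip looks roughly like this (a sketch; headers are passed as name/value pairs):\nfrom hpack import Encoder, Decoder\n\nencoder = Encoder()\ndecoder = Decoder()\n\nheaders = [(':method', 'GET'), (':path', '/'), ('user-agent', 'hpack-demo')]\nblock = encoder.encode(headers)  # HPACK-compressed header block (bytes)\nprint(decoder.decode(block))     # back to a list of (name, value) pairs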
Unlike many other projects we are\nhappy to accept cosmetic contributions and small contributions, in addition to\nlarge feature requests and changes.\nBefore you contribute (either by opening an issue or filing a pull request),\nplease read the contribution guidelines.\n\nLicense\nhpack is made available under the MIT License. For more details, see the\nLICENSE file in the repository.\n\nAuthors\nhpack is maintained by Cory Benfield, with contributions from others. For\nmore details about the contributors, please see CONTRIBUTORS.rst.\n\n\n", "description": "HTTP/2 header encoding and decoding."}, {"name": "h11", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nh11\nFAQ\n\n\n\n\n\nREADME.rst\n\n\n\n\nh11\n\n\n\n\nThis is a little HTTP/1.1 library written from scratch in Python,\nheavily inspired by hyper-h2.\nIt's a \"bring-your-own-I/O\" library; h11 contains no IO code\nwhatsoever. This means you can hook h11 up to your favorite network\nAPI, and that could be anything you want: synchronous, threaded,\nasynchronous, or your own implementation of RFC 6214 -- h11 won't judge you.\n(Compare this to the current state of the art, where every time a new\nnetwork API comes along then someone\ngets to start over reimplementing the entire HTTP protocol from\nscratch.) Cory Benfield made an excellent blog post describing the\nbenefits of this approach, or if you like video\nthen here's his PyCon 2016 talk on the same theme.\nThis also means that h11 is not immediately useful out of the box:\nit's a toolkit for building programs that speak HTTP, not something\nthat could directly replace requests or twisted.web or\nwhatever. But h11 makes it much easier to implement something like\nrequests or twisted.web.\nAt a high level, working with h11 goes like this:\n\nFirst, create an h11.Connection object to track the state of a\nsingle HTTP/1.1 connection.\nWhen you read data off the network, pass it to\nconn.receive_data(...); you'll get back a list of objects\nrepresenting high-level HTTP \"events\".\nWhen you want to send a high-level HTTP event, create the\ncorresponding \"event\" object and pass it to conn.send(...);\nthis will give you back some bytes that you can then push out\nthrough the network.\n\nFor example, a client might instantiate and then send a\nh11.Request object, then zero or more h11.Data objects for the\nrequest body (e.g., if this is a POST), and then a\nh11.EndOfMessage to indicate the end of the message. Then the\nserver would then send back a h11.Response, some h11.Data, and\nits own h11.EndOfMessage. If either side violates the protocol,\nyou'll get a h11.ProtocolError exception.\nh11 is suitable for implementing both servers and clients, and has a\npleasantly symmetric API: the events you send as a client are exactly\nthe ones that you receive as a server and vice-versa.\nHere's an example of a tiny HTTP client\nIt also has a fine manual.\n\nFAQ\nWhyyyyy?\nI wanted to play with HTTP in Curio and Trio, which at the time didn't have any\nHTTP libraries. So I thought, no big deal, Python has, like, a dozen\ndifferent implementations of HTTP, surely I can find one that's\nreusable. I didn't find one, but I did find Cory's call-to-arms\nblog-post. So I figured, well, fine, if I have to implement HTTP from\nscratch, at least I can make sure no-one else has to ever again.\nShould I use it?\nMaybe. You should be aware that it's a very young project. 
But, it's\nfeature complete and has an exhaustive test-suite and complete docs,\nso the next step is for people to try using it and see how it goes\n:-). If you do then please let us know -- if nothing else we'll want\nto talk to you before making any incompatible changes!\nWhat are the features/limitations?\nRoughly speaking, it's trying to be a robust, complete, and non-hacky\nimplementation of the first \"chapter\" of the HTTP/1.1 spec: RFC 7230:\nHTTP/1.1 Message Syntax and Routing. That is, it mostly focuses on\nimplementing HTTP at the level of taking bytes on and off the wire,\nand the headers related to that, and tries to be anal about spec\nconformance. It doesn't know about higher-level concerns like URL\nrouting, conditional GETs, cross-origin cookie policies, or content\nnegotiation. But it does know how to take care of framing,\ncross-version differences in keep-alive handling, and the \"obsolete\nline folding\" rule, so you can focus your energies on the hard /\ninteresting parts for your application, and it tries to support the\nfull specification in the sense that any useful HTTP/1.1 conformant\napplication should be able to use h11.\nIt's pure Python, and has no dependencies outside of the standard\nlibrary.\nIt has a test suite with 100.0% coverage for both statements and\nbranches.\nCurrently it supports Python 3 (testing on 3.7-3.10) and PyPy 3.\nThe last Python 2-compatible version was h11 0.11.x.\n(Originally it had a Cython wrapper for http-parser and a beautiful nested state\nmachine implemented with yield from to postprocess the output. But\nI had to take these out -- the new parser needs fewer lines-of-code\nthan the old parser wrapper, is written in pure Python, uses no\nexotic language syntax, and has more features. It's sad, really; that\nold state machine was really slick. I just need a few sentences here\nto mourn that.)\nI don't know how fast it is. I haven't benchmarked or profiled it yet,\nso it's probably got a few pointless hot spots, and I've been trying\nto err on the side of simplicity and robustness instead of\nmicro-optimization. But at the architectural level I tried hard to\navoid fundamentally bad decisions, e.g., I believe that all the\nparsing algorithms remain linear-time even in the face of pathological\ninput like slowloris, and there are no byte-by-byte loops. (I also\nbelieve that it maintains bounded memory usage in the face of\narbitrary/pathological input.)\nThe whole library is ~800 lines-of-code. You can read and understand\nthe whole thing in less than an hour. Most of the energy invested in\nthis so far has been spent on trying to keep things simple by\nminimizing special-cases and ad hoc state manipulation; even though it\nis now quite small and simple, I'm still annoyed that I haven't\nfigured out how to make it even smaller and simpler. 
(Unfortunately,\nHTTP does not lend itself to simplicity.)\nThe API is ~feature complete and I don't expect the general outlines\nto change much, but you can't judge an API's ergonomics until you\nactually document and use it, so I'd expect some changes in the\ndetails.\nHow do I try it?\n$ pip install h11\n$ git clone git@github.com:python-hyper/h11\n$ cd h11/examples\n$ python basic-client.py\nand go from there.\nLicense?\nMIT\nCode of conduct?\nContributors are requested to follow our code of conduct in\nall project spaces.\n\n\n", "description": "Pure Python HTTP/1.1 protocol library."}, {"name": "h5py", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nHDF5 for Python\nWebsites\nInstallation\nReporting bugs\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\nHDF5 for Python\nh5py is a thin, pythonic wrapper around HDF5,\nwhich runs on Python 3 (3.8+).\n\nWebsites\n\nMain website: https://www.h5py.org\nSource code: https://github.com/h5py/h5py\nDiscussion forum: https://forum.hdfgroup.org/c/hdf-tools/h5py\n\n\nInstallation\nPre-built h5py can either be installed via your Python Distribution (e.g.\nContinuum Anaconda, Enthought Canopy) or from PyPI via pip.\nh5py is also distributed in many Linux Distributions (e.g. Ubuntu, Fedora),\nand in the macOS package managers Homebrew,\nMacports, or Fink.\nMore detailed installation instructions, including how to install h5py with\nMPI support, can be found at: https://docs.h5py.org/en/latest/build.html.\n\nReporting bugs\nOpen a bug at https://github.com/h5py/h5py/issues.  For general questions, ask\non the HDF forum (https://forum.hdfgroup.org/c/hdf-tools/h5py).\n\n\n", "description": "Pythonic interface to the HDF5 binary data format."}, {"name": "h5netcdf", "readme": "\n\n\n\nA Python interface for the netCDF4 file-format that reads and writes local or\nremote HDF5 files directly via h5py or h5pyd, without relying on the Unidata\nnetCDF library.\n\nWhy h5netcdf?\n\nIt has one less binary dependency (netCDF C). If you already have h5py\ninstalled, reading netCDF4 with h5netcdf may be much easier than installing\nnetCDF4-Python.\nWe\u2019ve seen occasional reports of better performance with h5py than\nnetCDF4-python, though in many cases performance is identical. For\none workflow, h5netcdf was reported to be almost 4x faster than\nnetCDF4-python.\nAnecdotally, HDF5 users seem to be unexcited about switching to netCDF \u2013\nhopefully this will convince them that netCDF4 is actually quite sane!\nFinally, side-stepping the netCDF C library (and Cython bindings to it)\ngives us an easier way to identify the source of performance issues and\nbugs in the netCDF libraries/specification.\n\n\n\nInstall\nEnsure you have a recent version of h5py installed (I recommend using conda or\nthe community effort conda-forge).\nAt least version 3.0 is required. Then:\n$ pip install h5netcdf\nOr if you are already using conda:\n$ conda install h5netcdf\nNote:\nFrom version 1.2. h5netcdf tries to align with a nep29-like support policy with regard\nto it\u2019s upstream dependencies.\n\n\nUsage\nh5netcdf has two APIs, a new API and a legacy API. Both interfaces currently\nreproduce most of the features of the netCDF interface, with the notable\nexception of support for operations that rename or delete existing objects.\nWe simply haven\u2019t gotten around to implementing this yet. Patches\nwould be very welcome.\n\nNew API\nThe new API supports direct hierarchical access of variables and groups. Its\ndesign is an adaptation of h5py to the netCDF data model. 
For example:\nimport h5netcdf\nimport numpy as np\n\nwith h5netcdf.File('mydata.nc', 'w') as f:\n    # set dimensions with a dictionary\n    f.dimensions = {'x': 5}\n    # and update them with a dict-like interface\n    # f.dimensions['x'] = 5\n    # f.dimensions.update({'x': 5})\n\n    v = f.create_variable('hello', ('x',), float)\n    v[:] = np.ones(5)\n\n    # you don't need to create groups first\n    # you also don't need to create dimensions first if you supply data\n    # with the new variable\n    v = f.create_variable('/grouped/data', ('y',), data=np.arange(10))\n\n    # access and modify attributes with a dict-like interface\n    v.attrs['foo'] = 'bar'\n\n    # you can access variables and groups directly using a hierarchical\n    # keys like h5py\n    print(f['/grouped/data'])\n\n    # add an unlimited dimension\n    f.dimensions['z'] = None\n    # explicitly resize a dimension and all variables using it\n    f.resize_dimension('z', 3)\nNotes:\n\nAutomatic resizing of unlimited dimensions with array indexing is not available.\nDimensions need to be manually resized with Group.resize_dimension(dimension, size).\nArrays are returned padded with fillvalue (taken from underlying hdf5 dataset) up to\ncurrent size of variable\u2019s dimensions. The behaviour is equivalent to netCDF4-python\u2019s\nDataset.set_auto_mask(False).\n\n\n\nLegacy API\nThe legacy API is designed for compatibility with netCDF4-python. To use it, import\nh5netcdf.legacyapi:\nimport h5netcdf.legacyapi as netCDF4\n# everything here would also work with this instead:\n# import netCDF4\nimport numpy as np\n\nwith netCDF4.Dataset('mydata.nc', 'w') as ds:\n    ds.createDimension('x', 5)\n    v = ds.createVariable('hello', float, ('x',))\n    v[:] = np.ones(5)\n\n    g = ds.createGroup('grouped')\n    g.createDimension('y', 10)\n    g.createVariable('data', 'i8', ('y',))\n    v = g['data']\n    v[:] = np.arange(10)\n    v.foo = 'bar'\n    print(ds.groups['grouped'].variables['data'])\nThe legacy API is designed to be easy to try-out for netCDF4-python users, but it is not an\nexact match. Here is an incomplete list of functionality we don\u2019t include:\n\nUtility functions chartostring, num2date, etc., that are not directly necessary\nfor writing netCDF files.\nh5netcdf variables do not support automatic masking or scaling (e.g., of values matching\nthe _FillValue attribute). We prefer to leave this functionality to client libraries\n(e.g., xarray), which can implement their exact desired scaling behavior. Nevertheless\narrays are returned padded with fillvalue (taken from underlying hdf5 dataset) up to\ncurrent size of variable\u2019s dimensions. The behaviour is equivalent to netCDF4-python\u2019s\nDataset.set_auto_mask(False).\n\n\n\nInvalid netCDF files\nh5py implements some features that do not (yet) result in valid netCDF files:\n\n\nData types:\n\nBooleans\nComplex values\nNon-string variable length types\nEnum types\nReference types\n\n\n\n\n\nArbitrary filters:\n\nScale-offset filters\n\n\n\n\n\nBy default [1], h5netcdf will not allow writing files using any of these features,\nas files with such features are not readable by other netCDF tools.\nHowever, these are still valid HDF5 files. 
If you don\u2019t care about netCDF\ncompatibility, you can use these features by setting invalid_netcdf=True\nwhen creating a file:\n# avoid the .nc extension for non-netcdf files\nf = h5netcdf.File('mydata.h5', invalid_netcdf=True)\n...\n\n# works with the legacy API, too, though compression options are not exposed\nds = h5netcdf.legacyapi.Dataset('mydata.h5', invalid_netcdf=True)\n...\nIn such cases the _NCProperties attribute will not be saved to the file or be removed\nfrom an existing file. A warning will be issued if the file has .nc-extension.\nFootnotes\n\n\n[1]\nh5netcdf we will raise h5netcdf.CompatibilityError.\n\n\n\n\nDecoding variable length strings\nh5py 3.0 introduced new behavior for handling variable length string.\nInstead of being automatically decoded with UTF-8 into NumPy arrays of str,\nthey are required as arrays of bytes.\nThe legacy API preserves the old behavior of h5py (which matches netCDF4),\nand automatically decodes strings.\nThe new API matches h5py behavior. Explicitly set decode_vlen_strings=True\nin the h5netcdf.File constructor to opt-in to automatic decoding.\n\n\nDatasets with missing dimension scales\nBy default [2] h5netcdf raises a ValueError if variables with no dimension\nscale associated with one of their axes are accessed.\nYou can set phony_dims='sort' when opening a file to let h5netcdf invent\nphony dimensions according to netCDF behaviour.\n# mimic netCDF-behaviour for non-netcdf files\nf = h5netcdf.File('mydata.h5', mode='r', phony_dims='sort')\n...\nNote, that this iterates once over the whole group-hierarchy. This has affects\non performance in case you rely on laziness of group access.\nYou can set phony_dims='access' instead to defer phony dimension creation\nto group access time. The created phony dimension naming will differ from\nnetCDF behaviour.\nf = h5netcdf.File('mydata.h5', mode='r', phony_dims='access')\n...\nFootnotes\n\n\n[2]\nKeyword default setting phony_dims=None for backwards compatibility.\n\n\n\n\nTrack Order\nAs of h5netcdf 1.1.0, if h5py 3.7.0 or greater is detected, the track_order\nparameter is set to True enabling order tracking for newly created\nnetCDF4 files. This helps ensure that files created with the h5netcdf library\ncan be modified by the netCDF4-c and netCDF4-python implementation used in\nother software stacks. 
Since this change should be transparent to most users,\nit was made without deprecation.\nSince track_order is set at creation time, any dataset that was created with\ntrack_order=False (h5netcdf version 1.0.2 and older except for 0.13.0) will\ncontinue to opened with order tracker disabled.\nThe following describes the behavior of h5netcdf with respect to order tracking\nfor a few key versions:\n\nVersion 0.12.0 and earlier, the track_order parameter`order was missing\nand thus order tracking was implicitely set to False.\nVersion 0.13.0 enabled order tracking by setting the parameter\ntrack_order to True by default without deprecation.\nVersions 0.13.1 to 1.0.2 set track_order to False due to a bug in a\ncore dependency of h5netcdf, h5py upstream bug which was resolved in h5py\n3.7.0 with the help of the h5netcdf team.\nIn version 1.1.0, if h5py 3.7.0 or above is detected, the track_order\nparameter is set to True by default.\n\n\n\n\nChangelog\nChangelog\n\n\nLicense\n3-clause BSD\n\n", "description": "Read and write netCDF files via h5py."}, {"name": "h2", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nh2: HTTP/2 Protocol Stack\nDocumentation\nContributing\nLicense\nAuthors\n\n\n\n\n\nREADME.rst\n\n\n\n\nh2: HTTP/2 Protocol Stack\n\n\n\n\n\n\n\n\nThis repository contains a pure-Python implementation of a HTTP/2 protocol\nstack. It's written from the ground up to be embeddable in whatever program you\nchoose to use, ensuring that you can speak HTTP/2 regardless of your\nprogramming paradigm.\nYou use it like this:\nimport h2.connection\nimport h2.config\n\nconfig = h2.config.H2Configuration()\nconn = h2.connection.H2Connection(config=config)\nconn.send_headers(stream_id=stream_id, headers=headers)\nconn.send_data(stream_id, data)\nsocket.sendall(conn.data_to_send())\nevents = conn.receive_data(socket_data)\nThis repository does not provide a parsing layer, a network layer, or any rules\nabout concurrency. Instead, it's a purely in-memory solution, defined in terms\nof data actions and HTTP/2 frames. This is one building block of a full Python\nHTTP implementation.\nTo install it, just run:\n$ python -m pip install h2\n\nDocumentation\nDocumentation is available at https://h2.readthedocs.io .\n\nContributing\nh2 welcomes contributions from anyone! Unlike many other projects we\nare happy to accept cosmetic contributions and small contributions, in addition\nto large feature requests and changes.\nBefore you contribute (either by opening an issue or filing a pull request),\nplease read the contribution guidelines.\n\nLicense\nh2 is made available under the MIT License. 
For more details, see the\nLICENSE file in the repository.\n\nAuthors\nh2 was authored by Cory Benfield and is maintained\nby the members of python-hyper.\n\n\n", "description": "Pure-Python HTTP/2 protocol stack implementation."}, {"name": "gTTS", "readme": "\ngTTS\ngTTS (Google Text-to-Speech), a Python library and CLI tool to interface with Google Translate's text-to-speech API.\nWrite spoken mp3 data to a file, a file-like object (bytestring) for further audio manipulation, or stdout.\nhttp://gtts.readthedocs.org/\n\n\n\n\n\n\n\nFeatures\n\nCustomizable speech-specific sentence tokenizer that allows for unlimited lengths of text to be read, all while keeping proper intonation, abbreviations, decimals and more;\nCustomizable text pre-processors which can, for example, provide pronunciation corrections;\n\nInstallation\n$ pip install gTTS\n\nQuickstart\nCommand Line:\n$ gtts-cli 'hello' --output hello.mp3\n\nModule:\n>>> from gtts import gTTS\n>>> tts = gTTS('hello')\n>>> tts.save('hello.mp3')\n\nSee http://gtts.readthedocs.org/ for documentation and examples.\nDisclaimer\nThis project is not affiliated with Google or Google Cloud. Breaking upstream changes can occur without notice. This project is leveraging the undocumented Google Translate speech functionality and is different from Google Cloud Text-to-Speech.\nProject\n\nQuestions & community\nChangelog\nContributing\n\nLicence\nThe MIT License (MIT) Copyright \u00a9 2014-2023 Pierre Nicolas Durette & Contributors\n", "description": "Python library and CLI tool to interface with Google Translate's text-to-speech API."}, {"name": "graphviz", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nGraphviz\nLinks\nInstallation\nQuickstart\nSee also\nLicense\nDevelopment\n\n\n\n\n\nREADME.rst\n\n\n\n\nGraphviz\n\n \n \n \n \n\n \n  \n\nThis package facilitates the creation and rendering of graph descriptions in\nthe DOT language of the Graphviz graph drawing software (upstream repo)\nfrom Python.\nCreate a graph object, assemble the graph by adding nodes and edges, and\nretrieve its DOT source code string. Save the source code to a file and render\nit with the Graphviz installation of your system.\nUse the view option/method to directly inspect the resulting (PDF, PNG,\nSVG, etc.) file with its default application. 
Graphs can also be rendered\nand displayed within Jupyter notebooks (formerly known as\nIPython notebooks,\nexample, nbviewer)\nas well as the Jupyter QtConsole.\n\nLinks\n\nGitHub: https://github.com/xflr6/graphviz\nPyPI: https://pypi.org/project/graphviz/\nDocumentation: https://graphviz.readthedocs.io\nChangelog: https://graphviz.readthedocs.io/en/latest/changelog.html\nIssue Tracker: https://github.com/xflr6/graphviz/issues\nDownload: https://pypi.org/project/graphviz/#files\n\n\nInstallation\nThis package runs under Python 3.8+, use pip to install:\n$ pip install graphviz\nTo render the generated DOT source code, you also need to install Graphviz\n(download page,\narchived versions,\ninstallation procedure for Windows).\nMake sure that the directory containing the dot executable is on your\nsystems' PATH\n(sometimes done by the installer;\nsetting PATH\non Linux,\nMac,\nand Windows).\nAnaconda: see the conda-forge package\nconda-forge/python-graphviz\n(feedstock),\nwhich should automatically conda install\nconda-forge/graphviz\n(feedstock) as dependency.\n\nQuickstart\nCreate a graph object:\n>>> import graphviz  # doctest: +NO_EXE\n>>> dot = graphviz.Digraph(comment='The Round Table')\n>>> dot  #doctest: +ELLIPSIS\n<graphviz.graphs.Digraph object at 0x...>\nAdd nodes and edges:\n>>> dot.node('A', 'King Arthur')  # doctest: +NO_EXE\n>>> dot.node('B', 'Sir Bedevere the Wise')\n>>> dot.node('L', 'Sir Lancelot the Brave')\n\n>>> dot.edges(['AB', 'AL'])\n>>> dot.edge('B', 'L', constraint='false')\nCheck the generated source code:\n>>> print(dot.source)  # doctest: +NORMALIZE_WHITESPACE +NO_EXE\n// The Round Table\ndigraph {\n    A [label=\"King Arthur\"]\n    B [label=\"Sir Bedevere the Wise\"]\n    L [label=\"Sir Lancelot the Brave\"]\n    A -> B\n    A -> L\n    B -> L [constraint=false]\n}\nSave and render the source code:\n>>> doctest_mark_exe()\n\n>>> dot.render('doctest-output/round-table.gv').replace('\\\\', '/')\n'doctest-output/round-table.gv.pdf'\nSave and render and view the result:\n>>> doctest_mark_exe()\n\n>>> dot.render('doctest-output/round-table.gv', view=True)  # doctest: +SKIP\n'doctest-output/round-table.gv.pdf'\n\nCaveat:\nBackslash-escapes and strings of the form <...>\nhave a special meaning in the DOT language.\nIf you need to render arbitrary strings (e.g. 
from user input),\ncheck the details in the user guide.\n\nSee also\n\npygraphviz \u2013 full-blown interface wrapping the Graphviz C library with SWIG\ngraphviz-python \u2013 official Python bindings\n(documentation)\npydot \u2013 stable pure-Python approach, requires pyparsing\n\n\nLicense\nThis package is distributed under the MIT license.\n\nDevelopment\n\nDevelopment documentation: https://graphviz.readthedocs.io/en/latest/development.html\nRelease process: https://graphviz.readthedocs.io/en/latest/release_process.html\n\n\n\n", "description": "Python interface to Graphviz graph visualization library."}, {"name": "gradio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGradio: Build Machine Learning Web Apps \u2014 in Python\nQuickstart\nWhat Does Gradio Do?\nHello, World\nThe Interface Class\nComponents Attributes\nMultiple Input and Output Components\nAn Image Example\nChatbots\nBlocks: More Flexibility and Control\nHello, Blocks\nMore Complexity\nOpen Source Stack\nLicense\nCitation\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\nBuild & share delightful machine learning apps easily\n\n\n\n\n\n\nWebsite\n| Documentation\n| Guides\n| Getting Started\n| Examples\n| \u4e2d\u6587\n\nGradio: Build Machine Learning Web Apps \u2014 in Python\nGradio is an open-source Python library that is used to build machine learning and data science demos and web applications.\nWith Gradio, you can quickly create a beautiful user interface around your machine learning models or data science workflow and let people \"try it out\" by dragging-and-dropping in their own images,\npasting text, recording their own voice, and interacting with your demo, all through the browser.\n\nGradio is useful for:\n\n\nDemoing your machine learning models for clients/collaborators/users/students.\n\n\nDeploying your models quickly with automatic shareable links and getting feedback on model performance.\n\n\nDebugging your model interactively during development using built-in manipulation and interpretation tools.\n\n\nQuickstart\nPrerequisite: Gradio requires Python 3.8 or higher, that's all!\nWhat Does Gradio Do?\nOne of the best ways to share your machine learning model, API, or data science workflow with others is to create an interactive app that allows your users or colleagues to try out the demo in their browsers.\nGradio allows you to build demos and share them, all in Python. And usually in just a few lines of code! So let's get started.\nHello, World\nTo get Gradio running with a simple \"Hello, World\" example, follow these three steps:\n1. Install Gradio using pip:\npip install gradio\n2. Run the code below as a Python script or in a Jupyter Notebook (or Google Colab):\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n\ndemo.launch()\nWe shorten the imported name to gr for better readability of code using Gradio. This is a widely adopted convention that you should follow so that anyone working with your code can easily understand it.\n3. The demo below will appear automatically within the Jupyter Notebook, or pop in a browser on http://localhost:7860 if running from a script:\n\nWhen developing locally, if you want to run the code as a Python script, you can use the Gradio CLI to launch the application in reload mode, which will provide seamless and fast development. 
Learn more about reloading in the Auto-Reloading Guide.\ngradio app.py\nNote: you can also do python app.py, but it won't provide the automatic reload mechanism.\nThe Interface Class\nYou'll notice that in order to make the demo, we created a gr.Interface. This Interface class can wrap any Python function with a user interface. In the example above, we saw a simple text-based function, but the function could be anything from music generator to a tax calculator to the prediction function of a pretrained machine learning model.\nThe core Interface class is initialized with three required parameters:\n\nfn: the function to wrap a UI around\ninputs: which component(s) to use for the input (e.g. \"text\", \"image\" or \"audio\")\noutputs: which component(s) to use for the output (e.g. \"text\", \"image\" or \"label\")\n\nLet's take a closer look at these components used to provide input and output.\nComponents Attributes\nWe saw some simple Textbox components in the previous examples, but what if you want to change how the UI components look or behave?\nLet's say you want to customize the input text field \u2014 for example, you wanted it to be larger and have a text placeholder. If we use the actual class for Textbox instead of using the string shortcut, you have access to much more customizability through component attributes.\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(\n    fn=greet,\n    inputs=gr.Textbox(lines=2, placeholder=\"Name Here...\"),\n    outputs=\"text\",\n)\ndemo.launch()\n\nMultiple Input and Output Components\nSuppose you had a more complex function, with multiple inputs and outputs. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number. Take a look how you pass a list of input and output components.\nimport gradio as gr\n\ndef greet(name, is_morning, temperature):\n    salutation = \"Good morning\" if is_morning else \"Good evening\"\n    greeting = f\"{salutation} {name}. It is {temperature} degrees today\"\n    celsius = (temperature - 32) * 5 / 9\n    return greeting, round(celsius, 2)\n\ndemo = gr.Interface(\n    fn=greet,\n    inputs=[\"text\", \"checkbox\", gr.Slider(0, 100)],\n    outputs=[\"text\", \"number\"],\n)\ndemo.launch()\n\nYou simply wrap the components in a list. Each component in the inputs list corresponds to one of the parameters of the function, in order. Each component in the outputs list corresponds to one of the values returned by the function, again in order.\nAn Image Example\nGradio supports many types of components, such as Image, DataFrame, Video, or Label. Let's try an image-to-image function to get a feel for these!\nimport numpy as np\nimport gradio as gr\n\ndef sepia(input_img):\n    sepia_filter = np.array([\n        [0.393, 0.769, 0.189],\n        [0.349, 0.686, 0.168],\n        [0.272, 0.534, 0.131]\n    ])\n    sepia_img = input_img.dot(sepia_filter.T)\n    sepia_img /= sepia_img.max()\n    return sepia_img\n\ndemo = gr.Interface(sepia, gr.Image(shape=(200, 200)), \"image\")\ndemo.launch()\n\nWhen using the Image component as input, your function will receive a NumPy array with the shape (height, width, 3), where the last dimension represents the RGB values. We'll return an image as well in the form of a NumPy array.\nYou can also set the datatype used by the component with the type= keyword argument. 
For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input Image component could be written as:\ngr.Image(type=\"filepath\", shape=...)\nAlso note that our input Image component comes with an edit button \ud83d\udd89, which allows for cropping and zooming into images. Manipulating images in this way can help reveal biases or hidden flaws in a machine learning model!\nYou can read more about the many components and how to use them in the Gradio docs.\nChatbots\nGradio includes a high-level class, gr.ChatInterface, which is similar to gr.Interface, but is specifically designed for chatbot UIs. The gr.ChatInterface class also wraps a function but this function must have a specific signature. The function should take two arguments: message and then history (the arguments can be named anything, but must be in this order)\n\nmessage: a str representing the user's input\nhistory: a list of list representing the conversations up until that point. Each inner list consists of two str representing a pair: [user input, bot response].\n\nYour function should return a single string response, which is the bot's response to the particular user input message.\nOther than that, gr.ChatInterface has no required parameters (though several are available for customization of the UI).\nHere's a toy example:\nimport random\nimport gradio as gr\n\ndef random_response(message, history):\n    return random.choice([\"Yes\", \"No\"])\n\ndemo = gr.ChatInterface(random_response)\n\ndemo.launch()\n\nYou can read more about gr.ChatInterface here.\nBlocks: More Flexibility and Control\nGradio offers two approaches to build apps:\n1. Interface and ChatInterface, which provide a high-level abstraction for creating demos that we've been discussing so far.\n2. Blocks, a low-level API for designing web apps with more flexible layouts and data flows. Blocks allows you to do things like feature multiple data flows and demos, control where components appear on the page, handle complex data flows (e.g. outputs can serve as inputs to other functions), and update properties/visibility of components based on user interaction \u2014 still all in Python. If this customizability is what you need, try Blocks instead!\nHello, Blocks\nLet's take a look at a simple example. Note how the API here differs from Interface.\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\nwith gr.Blocks() as demo:\n    name = gr.Textbox(label=\"Name\")\n    output = gr.Textbox(label=\"Output Box\")\n    greet_btn = gr.Button(\"Greet\")\n    greet_btn.click(fn=greet, inputs=name, outputs=output, api_name=\"greet\")\n\n\ndemo.launch()\n\nThings to note:\n\nBlocks are made with a with clause, and any component created inside this clause is automatically added to the app.\nComponents appear vertically in the app in the order they are created. (Later we will cover customizing layouts!)\nA Button was created, and then a click event-listener was added to this button. The API for this should look familiar! 
Like an Interface, the click method takes a Python function, input components, and output components.\n\nMore Complexity\nHere's an app to give you a taste of what's possible with Blocks:\nimport numpy as np\nimport gradio as gr\n\n\ndef flip_text(x):\n    return x[::-1]\n\n\ndef flip_image(x):\n    return np.fliplr(x)\n\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Flip text or image files using this demo.\")\n    with gr.Tab(\"Flip Text\"):\n        text_input = gr.Textbox()\n        text_output = gr.Textbox()\n        text_button = gr.Button(\"Flip\")\n    with gr.Tab(\"Flip Image\"):\n        with gr.Row():\n            image_input = gr.Image()\n            image_output = gr.Image()\n        image_button = gr.Button(\"Flip\")\n\n    with gr.Accordion(\"Open for More!\"):\n        gr.Markdown(\"Look at me...\")\n\n    text_button.click(flip_text, inputs=text_input, outputs=text_output)\n    image_button.click(flip_image, inputs=image_input, outputs=image_output)\n\ndemo.launch()\n\nA lot more going on here! We'll cover how to create complex Blocks apps like this in the building with blocks section for you.\nCongrats, you're now familiar with the basics of Gradio! \ud83e\udd73 Go to our next guide to learn more about the key features of Gradio.\nOpen Source Stack\nGradio is built with many wonderful open-source libraries, please support them as well!\n\n\n\n\n\n\n\n\n\n\nLicense\nGradio is licensed under the Apache License 2.0 found in the LICENSE file in the root directory of this repository.\nCitation\nAlso check out the paper Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild, ICML HILL 2019, and please cite it if you use Gradio in your work.\n@article{abid2019gradio,\n  title = {Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild},\n  author = {Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfozan, Abdulrahman and Zou, James},\n  journal = {arXiv preprint arXiv:1906.02569},\n  year = {2019},\n}\n\n\n\n", "description": "Library for creating ML web interfaces and demos."}, {"name": "geopy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ngeopy\nInstallation\nGeocoding\nMeasuring Distance\nDocumentation\n\n\n\n\n\nREADME.rst\n\n\n\n\ngeopy\n\n\n\ngeopy is a Python client for several popular geocoding web\nservices.\ngeopy makes it easy for Python developers to locate the coordinates of\naddresses, cities, countries, and landmarks across the globe using\nthird-party geocoders and other data sources.\ngeopy includes geocoder classes for the OpenStreetMap Nominatim,\nGoogle Geocoding API (V3), and many other geocoding services.\nThe full list is available on the Geocoders doc section.\nGeocoder classes are located in geopy.geocoders.\ngeopy is tested against CPython (versions 3.7, 3.8, 3.9, 3.10, 3.11, 3.12)\nand PyPy3. 
geopy 1.x line also supported CPython 2.7, 3.4 and PyPy2.\n\u00a9 geopy contributors 2006-2018 (see AUTHORS) under the MIT\nLicense.\n\nInstallation\nInstall using pip with:\npip install geopy\n\nOr, download a wheel or source archive from\nPyPI.\n\nGeocoding\nTo geolocate a query to an address and coordinates:\n>>> from geopy.geocoders import Nominatim\n>>> geolocator = Nominatim(user_agent=\"specify_your_app_name_here\")\n>>> location = geolocator.geocode(\"175 5th Avenue NYC\")\n>>> print(location.address)\nFlatiron Building, 175, 5th Avenue, Flatiron, New York, NYC, New York, ...\n>>> print((location.latitude, location.longitude))\n(40.7410861, -73.9896297241625)\n>>> print(location.raw)\n{'place_id': '9167009604', 'type': 'attraction', ...}\nTo find the address corresponding to a set of coordinates:\n>>> from geopy.geocoders import Nominatim\n>>> geolocator = Nominatim(user_agent=\"specify_your_app_name_here\")\n>>> location = geolocator.reverse(\"52.509669, 13.376294\")\n>>> print(location.address)\nPotsdamer Platz, Mitte, Berlin, 10117, Deutschland, European Union\n>>> print((location.latitude, location.longitude))\n(52.5094982, 13.3765983)\n>>> print(location.raw)\n{'place_id': '654513', 'osm_type': 'node', ...}\n\nMeasuring Distance\nGeopy can calculate geodesic distance between two points using the\ngeodesic distance or the\ngreat-circle distance,\nwith a default of the geodesic distance available as the function\ngeopy.distance.distance.\nHere's an example usage of the geodesic distance, taking pair\nof (lat, lon) tuples:\n>>> from geopy.distance import geodesic\n>>> newport_ri = (41.49008, -71.312796)\n>>> cleveland_oh = (41.499498, -81.695391)\n>>> print(geodesic(newport_ri, cleveland_oh).miles)\n538.390445368\nUsing great-circle distance, also taking pair of (lat, lon) tuples:\n>>> from geopy.distance import great_circle\n>>> newport_ri = (41.49008, -71.312796)\n>>> cleveland_oh = (41.499498, -81.695391)\n>>> print(great_circle(newport_ri, cleveland_oh).miles)\n536.997990696\n\nDocumentation\nMore documentation and examples can be found at\nRead the Docs.\n\n\n", "description": "Python Geocoding Toolbox."}, {"name": "geopandas", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nGeoPandas\nIntroduction\nInstall\nGet in touch\nExamples\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\n\nGeoPandas\nPython tools for geographic data\nIntroduction\nGeoPandas is a project to add support for geographic data to\npandas objects.  It currently implements\nGeoSeries and GeoDataFrame types which are subclasses of\npandas.Series and pandas.DataFrame respectively.  GeoPandas\nobjects can act on shapely\ngeometry objects and perform geometric operations.\nGeoPandas geometry operations are cartesian.  The coordinate reference\nsystem (crs) can be stored as an attribute on an object, and is\nautomatically set when loading from a file.  Objects may be\ntransformed to new coordinate systems with the to_crs() method.\nThere is currently no enforcement of like coordinates for operations,\nbut that may change in the future.\nDocumentation is available at geopandas.org\n(current release) and\nRead the Docs\n(release and development versions).\nThe GeoPandas project uses an open governance model\nand is fiscally sponsored by NumFOCUS. Consider making\na tax-deductible donation to help the project\npay for developer time, professional services, travel, workshops, and a variety of other needs.\n\n\n\n\n\n\nInstall\nSee the installation docs\nfor all details. 
GeoPandas depends on the following packages:\n\npandas\nshapely\nfiona\npyproj\npackaging\n\nFurther, matplotlib is an optional dependency, required\nfor plotting, and rtree is an optional\ndependency, required for spatial joins. rtree requires the C library libspatialindex.\nThose packages depend on several low-level libraries for geospatial analysis, which can be a challenge to install. Therefore, we recommend to install GeoPandas using the conda package manager. See the installation docs for more details.\nGet in touch\n\nAsk usage questions (\"How do I?\") on StackOverflow or GIS StackExchange.\nGet involved in discussions on GitHub\nReport bugs, suggest features or view the source code on GitHub.\nFor a quick question about a bug report or feature request, or Pull Request, head over to the gitter channel.\nFor less well defined questions or ideas, or to announce other projects of interest to GeoPandas users, ... use the mailing list.\n\nExamples\n>>> import geopandas\n>>> from shapely.geometry import Polygon\n>>> p1 = Polygon([(0, 0), (1, 0), (1, 1)])\n>>> p2 = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])\n>>> p3 = Polygon([(2, 0), (3, 0), (3, 1), (2, 1)])\n>>> g = geopandas.GeoSeries([p1, p2, p3])\n>>> g\n0         POLYGON ((0 0, 1 0, 1 1, 0 0))\n1    POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))\n2    POLYGON ((2 0, 3 0, 3 1, 2 1, 2 0))\ndtype: geometry\n\n\nSome geographic operations return normal pandas objects.  The area property of a GeoSeries will return a pandas.Series containing the area of each item in the GeoSeries:\n>>> print(g.area)\n0    0.5\n1    1.0\n2    1.0\ndtype: float64\n\nOther operations return GeoPandas objects:\n>>> g.buffer(0.5)\n0    POLYGON ((-0.3535533905932737 0.35355339059327...\n1    POLYGON ((-0.5 0, -0.5 1, -0.4975923633360985 ...\n2    POLYGON ((1.5 0, 1.5 1, 1.502407636663901 1.04...\ndtype: geometry\n\n\nGeoPandas objects also know how to plot themselves. GeoPandas uses\nmatplotlib for plotting. To generate a plot of a\nGeoSeries, use:\n>>> g.plot()\n\nGeoPandas also implements alternate constructors that can read any data format recognized by fiona. 
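Before the file-reading example that follows, here is a minimal, hedged sketch of the coordinate reference system handling described in the introduction; the EPSG codes and coordinates are only illustrative:\nimport geopandas\nfrom shapely.geometry import Point\n\n# Two points in geographic coordinates (WGS84); the EPSG codes are examples only.\npts = geopandas.GeoSeries([Point(-73.99, 40.73), Point(-73.96, 40.78)], crs='EPSG:4326')\n\n# Reproject to a projected CRS so cartesian operations (length, area, buffer) work in metres.\npts_merc = pts.to_crs('EPSG:3857')\nprint(pts_merc.crs.to_epsg())\nprint(pts_merc.iloc[0].distance(pts_merc.iloc[1]))  # separation in (Web Mercator) metres\nBecause GeoPandas geometry operations are cartesian, reprojecting to a suitable CRS before measuring is usually the safer choice.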
To read a zip file containing an ESRI shapefile with the boroughs boundaries of New York City (the example can be fetched using the geodatasets package):\n>>> import geodatasets\n>>> nybb_path = geodatasets.get_path('nybb')\n>>> boros = geopandas.read_file(nybb_path)\n>>> boros.set_index('BoroCode', inplace=True)\n>>> boros.sort_index(inplace=True)\n>>> boros\n               BoroName     Shape_Leng    Shape_Area  \\\nBoroCode\n1             Manhattan  359299.096471  6.364715e+08\n2                 Bronx  464392.991824  1.186925e+09\n3              Brooklyn  741080.523166  1.937479e+09\n4                Queens  896344.047763  3.045213e+09\n5         Staten Island  330470.010332  1.623820e+09\n\n                                                   geometry\nBoroCode\n1         MULTIPOLYGON (((981219.0557861328 188655.31579...\n2         MULTIPOLYGON (((1012821.805786133 229228.26458...\n3         MULTIPOLYGON (((1021176.479003906 151374.79699...\n4         MULTIPOLYGON (((1029606.076599121 156073.81420...\n5         MULTIPOLYGON (((970217.0223999023 145643.33221...\n\n\n>>> boros['geometry'].convex_hull\nBoroCode\n1    POLYGON ((977855.4451904297 188082.3223876953,...\n2    POLYGON ((1017949.977600098 225426.8845825195,...\n3    POLYGON ((988872.8212280273 146772.0317993164,...\n4    POLYGON ((1000721.531799316 136681.776184082, ...\n5    POLYGON ((915517.6877458114 120121.8812543372,...\ndtype: geometry\n\n\n\n\n", "description": "Geographic pandas extensions.", "category": "Geospatial"}, {"name": "geographiclib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Library for geographic manipulations."}, {"name": "gensim", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ngensim \u2013 Topic Modelling in Python\n\u26a0\ufe0f Want to help out? Sponsor Gensim \u2764\ufe0f\n\u26a0\ufe0f Gensim is in stable maintenance mode: we are not accepting new features, but bug and documentation fixes are still welcome! \u26a0\ufe0f\nFeatures\nInstallation\nHow come gensim is so fast and memory efficient? 
Isn\u2019t it pure Python, and isn\u2019t Python slow and greedy?\nDocumentation\nSupport\nAdopters\nCiting gensim\n\n\n\n\n\nREADME.md\n\n\n\n\ngensim \u2013 Topic Modelling in Python\n\n\n\n\n\n\nGensim is a Python library for topic modelling, document indexing\nand similarity retrieval with large corpora. Target audience is the\nnatural language processing (NLP) and information retrieval (IR)\ncommunity.\n\u26a0\ufe0f Want to help out? Sponsor Gensim \u2764\ufe0f\n\u26a0\ufe0f Gensim is in stable maintenance mode: we are not accepting new features, but bug and documentation fixes are still welcome! \u26a0\ufe0f\nFeatures\n\nAll algorithms are memory-independent w.r.t. the corpus size\n(can process input larger than RAM, streamed, out-of-core),\nIntuitive interfaces\n\neasy to plug in your own input corpus/datastream (trivial\nstreaming API)\neasy to extend with other Vector Space algorithms (trivial\ntransformation API)\n\n\nEfficient multicore implementations of popular algorithms, such as\nonline Latent Semantic Analysis (LSA/LSI/SVD), Latent\nDirichlet Allocation (LDA), Random Projections (RP),\nHierarchical Dirichlet Process (HDP) or word2vec deep\nlearning.\nDistributed computing: can run Latent Semantic Analysis and\nLatent Dirichlet Allocation on a cluster of computers.\nExtensive documentation and Jupyter Notebook tutorials.\n\nIf this feature list left you scratching your head, you can first read\nmore about the Vector Space Model and unsupervised document analysis\non Wikipedia.\nInstallation\nThis software depends on NumPy and Scipy, two Python packages for\nscientific computing. You must have them installed prior to installing\ngensim.\nIt is also recommended you install a fast BLAS library before installing\nNumPy. This is optional, but using an optimized BLAS such as MKL, ATLAS or\nOpenBLAS is known to improve performance by as much as an order of\nmagnitude. On OSX, NumPy picks up its vecLib BLAS automatically,\nso you don\u2019t need to do anything special.\nInstall the latest version of gensim:\n    pip install --upgrade gensim\nOr, if you have instead downloaded and unzipped the source tar.gz\npackage:\n    python setup.py install\nFor alternative modes of installation, see the documentation.\nGensim is being continuously tested under all\nsupported Python versions.\nSupport for Python 2.7 was dropped in gensim 4.0.0 \u2013 install gensim 3.8.3 if you must use Python 2.7.\nHow come gensim is so fast and memory efficient? Isn\u2019t it pure Python, and isn\u2019t Python slow and greedy?\nMany scientific algorithms can be expressed in terms of large matrix\noperations (see the BLAS note above). Gensim taps into these low-level\nBLAS libraries, by means of its dependency on NumPy. So while\ngensim-the-top-level-code is pure Python, it actually executes highly\noptimized Fortran/C under the hood, including multithreading (if your\nBLAS is so configured).\nMemory-wise, gensim makes heavy use of Python\u2019s built-in generators and\niterators for streamed data processing. Memory efficiency was one of\ngensim\u2019s design goals, and is a central feature of gensim, rather than\nsomething bolted on as an afterthought.\nDocumentation\n\nQuickStart\nTutorials\nOfficial API Documentation\n\nSupport\nFor commercial support, please see Gensim sponsorship.\nAsk open-ended questions on the public Gensim Mailing List.\nRaise bugs on Github but please make sure you follow the issue template. 
Issues that are not bugs or fail to provide the requested details will be closed without inspection.\n\nAdopters\n\n\n\nCompany\nLogo\nIndustry\nUse of Gensim\n\n\n\n\nRARE Technologies\n\nML & NLP consulting\nCreators of Gensim \u2013\u00a0this is us!\n\n\nAmazon\n\nRetail\nDocument similarity.\n\n\nNational Institutes of Health\n\nHealth\nProcessing grants and publications with word2vec.\n\n\nCisco Security\n\nSecurity\nLarge-scale fraud detection.\n\n\nMindseye\n\nLegal\nSimilarities in legal documents.\n\n\nChannel 4\n\nMedia\nRecommendation engine.\n\n\nTalentpair\n\nHR\nCandidate matching in high-touch recruiting.\n\n\nJuju\n\nHR\nProvide non-obvious related job suggestions.\n\n\nTailwind\n\nMedia\nPost interesting and relevant content to Pinterest.\n\n\nIssuu\n\nMedia\nGensim's LDA module lies at the very core of the analysis we perform on each uploaded publication to figure out what it's all about.\n\n\nSearch Metrics\n\nContent Marketing\nGensim word2vec used for entity disambiguation in Search Engine Optimisation.\n\n\n12K Research\n\nMedia\nDocument similarity analysis on media articles.\n\n\nStillwater Supercomputing\n\nHardware\nDocument comprehension and association with word2vec.\n\n\nSiteGround\n\nWeb hosting\nAn ensemble search engine which uses different embeddings models and similarities, including word2vec, WMD, and LDA.\n\n\nCapital One\n\nFinance\nTopic modeling for customer complaints exploration.\n\n\n\n\nCiting gensim\nWhen citing gensim in academic papers and theses, please use this\nBibTeX entry:\n@inproceedings{rehurek_lrec,\n      title = {{Software Framework for Topic Modelling with Large Corpora}},\n      author = {Radim {\\v R}eh{\\r u}{\\v r}ek and Petr Sojka},\n      booktitle = {{Proceedings of the LREC 2010 Workshop on New\n           Challenges for NLP Frameworks}},\n      pages = {45--50},\n      year = 2010,\n      month = May,\n      day = 22,\n      publisher = {ELRA},\n      address = {Valletta, Malta},\n      note={\\url{http://is.muni.cz/publication/884893/en}},\n      language={English}\n}\n\n\n\n", "description": "Topic Modelling in Python."}, {"name": "fuzzywuzzy", "readme": "\n\n\n\nREADME.md\n\n\n\n\nThis project has been renamed and moved to https://github.com/seatgeek/thefuzz\nTheFuzz version 0.19.0 correlates with this project's 0.18.0 version with thefuzz replacing all instances of this project's name.\nPRs and issues here will need to be resubmitted to TheFuzz\n\n\n", "description": "Fuzzy string matching in Python."}, {"name": "future", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  
For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Compatibility layer between Python 2 and Python 3."}, {"name": "frozenlist", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nfrozenlist\nIntroduction\nInstallation\nDocumentation\nCommunication channels\nRequirements\nLicense\nSource code\n\n\n\n\n\nREADME.rst\n\n\n\n\nfrozenlist\n\n\n\n\n\n\n\n\n\nIntroduction\nfrozenlist.FrozenList is a list-like structure which implements\ncollections.abc.MutableSequence. The list is mutable until FrozenList.freeze\nis called, after which list modifications raise RuntimeError:\n>>> from frozenlist import FrozenList\n>>> fl = FrozenList([17, 42])\n>>> fl.append('spam')\n>>> fl.append('Vikings')\n>>> fl\n<FrozenList(frozen=False, [17, 42, 'spam', 'Vikings'])>\n>>> fl.freeze()\n>>> fl\n<FrozenList(frozen=True, [17, 42, 'spam', 'Vikings'])>\n>>> fl.frozen\nTrue\n>>> fl.append(\"Monty\")\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"frozenlist/_frozenlist.pyx\", line 97, in frozenlist._frozenlist.FrozenList.append\n    self._check_frozen()\n  File \"frozenlist/_frozenlist.pyx\", line 19, in frozenlist._frozenlist.FrozenList._check_frozen\n    raise RuntimeError(\"Cannot modify frozen list.\")\nRuntimeError: Cannot modify frozen list.\nFrozenList is also hashable, but only when frozen. Otherwise it also throws a RuntimeError:\n>>> fl = FrozenList([17, 42, 'spam'])\n>>> hash(fl)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n  File \"frozenlist/_frozenlist.pyx\", line 111, in frozenlist._frozenlist.FrozenList.__hash__\n    raise RuntimeError(\"Cannot hash unfrozen list.\")\nRuntimeError: Cannot hash unfrozen list.\n>>> fl.freeze()\n>>> hash(fl)\n3713081631934410656\n>>> dictionary = {fl: 'Vikings'} # frozen fl can be a dict key\n>>> dictionary\n{<FrozenList(frozen=True, [1, 2])>: 'Vikings'}\n\nInstallation\n$ pip install frozenlist\n\nThe library requires Python 3.8 or newer.\n\nDocumentation\nhttps://frozenlist.aio-libs.org\n\nCommunication channels\nWe have a Matrix Space #aio-libs-space:matrix.org which is\nalso accessible via Gitter.\n\nRequirements\n\nPython >= 3.8\n\n\nLicense\nfrozenlist is offered under the Apache 2 license.\n\nSource code\nThe project is hosted on GitHub\nPlease file an issue in the bug tracker if you have found a bug\nor have some suggestions to improve the library.\n\n\n", "description": "Immutable list implementation."}, {"name": "fpdf", "readme": "\nPyFPDF is a library for PDF document generation under Python, ported\nfrom PHP (see FPDF \u201cFree\u201d-PDF, a well-known\nPDFlib-extension replacement with many examples, scripts and\nderivatives).\nCompared with other PDF libraries, PyFPDF is simple, small and\nversatile, with advanced capabilities and easy to learn, extend and\nmaintain.\n\nFeatures:\n\nPython 2.5 to 2.7 support (with experimental Python3 support)\nUnicode (UTF-8) TrueType font subset embedding\nBarcode I2of5 and code39, QR code coming soon \u2026\nPNG, GIF and JPG support (including transparency and alpha channel)\nTemplates with a visual designer & basic html2pdf\nExceptions support, other minor fixes, improvements and PEP8 
code\ncleanups\n\n\n\nInstallation Instructions:\nTo get the latest development version you can download the source code\nrunning:\nhg clone https://code.google.com/p/pyfpdf/\ncd pyfpdf\npython setup.py install\nYou can also install PyFPDF from PyPI, with easyinstall or from Windows\ninstallers. For example, using pip:\npip install fpdf\nNote: Python Imaging\nLibrary (PIL) is needed for\nGIF support. PNG and JPG support is built-in and don\u2019t require any\nexternal dependency.\n\n\nDocumentation:\n\nTutorial: https://code.google.com/p/pyfpdf/wiki/Tutorial\nReference Manual:\nhttps://code.google.com/p/pyfpdf/wiki/ReferenceManual (spanish\ntranslation available)\n\nFor further information, see the project site:\nhttps://code.google.com/p/pyfpdf/ or the GitHub mirror:\nhttps://github.com/reingart/pyfpdf\n\n", "description": "PDF document generation with Python."}, {"name": "fonttools", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWhat is this?\nInstallation\nOptional Requirements\nHow to make a new release\nAcknowledgements\nCopyrights\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n \n \n \n\n\nWhat is this?\n\nfontTools is a library for manipulating fonts, written in Python. The\nproject includes the TTX tool, that can convert TrueType and OpenType\nfonts to and from an XML text format, which is also called TTX. It\nsupports TrueType, OpenType, AFM and to an extent Type 1 and some\nMac-specific formats. The project has an MIT open-source\nlicence.\nAmong other things this means you can use it free of charge.\n\nUser documentation and\ndeveloper documentation\nare available at Read the Docs.\n\nInstallation\nFontTools requires Python 3.8\nor later. We try to follow the same schedule of minimum Python version support as\nNumPy (see NEP 29).\nThe package is listed in the Python Package Index (PyPI), so you can\ninstall it with pip:\npip install fonttools\nIf you would like to contribute to its development, you can clone the\nrepository from GitHub, install the package in 'editable' mode and\nmodify the source code in place. We recommend creating a virtual\nenvironment, using virtualenv or\nPython 3 venv module.\n# download the source code to 'fonttools' folder\ngit clone https://github.com/fonttools/fonttools.git\ncd fonttools\n\n# create new virtual environment called e.g. 'fonttools-venv', or anything you like\npython -m virtualenv fonttools-venv\n\n# source the `activate` shell script to enter the environment (Un*x); to exit, just type `deactivate`\n. fonttools-venv/bin/activate\n\n# to activate the virtual environment in Windows `cmd.exe`, do\nfonttools-venv\\Scripts\\activate.bat\n\n# install in 'editable' mode\npip install -e .\n\nOptional Requirements\nThe fontTools package currently has no (required) external dependencies\nbesides the modules included in the Python Standard Library.\nHowever, a few extra dependencies are required by some of its modules, which\nare needed to unlock optional features.\nThe fonttools PyPI distribution also supports so-called \"extras\", i.e. a\nset of keywords that describe a group of additional dependencies, which can be\nused when installing via pip, or when specifying a requirement.\nFor example:\npip install fonttools[ufo,lxml,woff,unicode]\nThis command will install fonttools, as well as the optional dependencies that\nare required to unlock the extra features named \"ufo\", etc.\n\nLib/fontTools/misc/etree.py\nThe module exports a ElementTree-like API for reading/writing XML files, and\nallows to use as the backend either the built-in xml.etree module or\nlxml. 
The latter is preferred whenever present,\nas it is generally faster and more secure.\nExtra: lxml\n\nLib/fontTools/ufoLib\nPackage for reading and writing UFO source files; it requires:\n\nfs: (aka pyfilesystem2) filesystem\nabstraction layer.\nenum34: backport for the built-in enum\nmodule (only required on Python < 3.4).\n\nExtra: ufo\n\nLib/fontTools/ttLib/woff2.py\nModule to compress/decompress WOFF 2.0 web fonts; it requires:\n\nbrotli: Python bindings of\nthe Brotli compression library.\n\nExtra: woff\n\nLib/fontTools/ttLib/sfnt.py\nTo better compress WOFF 1.0 web fonts, the following module can be used\ninstead of the built-in zlib library:\n\nzopfli: Python bindings of\nthe Zopfli compression library.\n\nExtra: woff\n\nLib/fontTools/unicode.py\nTo display the Unicode character names when dumping the cmap table\nwith ttx we use the unicodedata module in the Standard Library.\nThe\u00a0version included in there varies between different Python versions.\nTo use the latest available data, you can install:\n\nunicodedata2:\nunicodedata backport for Python 3.x updated to the latest Unicode\nversion 15.0.\n\nExtra: unicode\n\nLib/fontTools/varLib/interpolatable.py\nModule for finding wrong\u00a0contour/component order between different masters.\nIt requires one of the following packages in order to solve the so-called\n\"minimum weight perfect matching problem in bipartite graphs\", or\nthe Assignment problem:\n\nscipy: the Scientific Library\nfor Python, which internally uses NumPy\narrays and hence is very fast;\nmunkres: a pure-Python\nmodule that implements the Hungarian or Kuhn-Munkres algorithm.\n\nExtra: interpolatable\n\nLib/fontTools/varLib/plot.py\nModule for visualizing DesignSpaceDocument and resulting VariationModel.\n\nmatplotlib: 2D plotting library.\n\nExtra: plot\n\nLib/fontTools/misc/symfont.py\nAdvanced module for symbolic font statistics analysis; it requires:\n\nsympy: the Python library for\nsymbolic mathematics.\n\nExtra: symfont\n\nLib/fontTools/t1Lib.py\nTo get the file creator\u00a0and type of Macintosh PostScript Type 1 fonts\non Python 3 you need to install the following module, as the old MacOS\nmodule is no longer included in Mac Python:\n\nxattr: Python wrapper for\nextended filesystem attributes (macOS platform only).\n\nExtra: type1\n\nLib/fontTools/ttLib/removeOverlaps.py\nSimplify TrueType glyphs by merging overlapping contours and components.\n\nskia-pathops: Python\nbindings for the Skia library's PathOps module, performing boolean\noperations on paths (union, intersection, etc.).\n\nExtra: pathops\n\nLib/fontTools/pens/cocoaPen.py and Lib/fontTools/pens/quartzPen.py\nPens for drawing glyphs with Cocoa NSBezierPath or CGPath require:\n\nPyObjC: the bridge between\nPython and the Objective-C runtime (macOS platform only).\n\n\nLib/fontTools/pens/qtPen.py\nPen for drawing glyphs with Qt's QPainterPath, requires:\n\nPyQt5: Python bindings for\nthe Qt\u00a0cross platform UI and application toolkit.\n\n\nLib/fontTools/pens/reportLabPen.py\nPen to drawing glyphs as PNG images, requires:\n\nreportlab: Python toolkit\nfor generating PDFs and graphics.\n\n\nLib/fontTools/pens/freetypePen.py\nPen to drawing glyphs with FreeType as raster images, requires:\n\nfreetype-py: Python binding\nfor the FreeType library.\n\n\nLib/fontTools/ttLib/tables/otBase.py\nUse the Harfbuzz library to serialize GPOS/GSUB using hb_repack method, requires:\n\nuharfbuzz: Streamlined Cython\nbindings for the harfbuzz shaping engine\n\nExtra: repacker\n\n\n\nHow to make a new 
release\n\nUpdate NEWS.rst with all the changes since the last release. Write a\nchangelog entry for each PR, with one or two short sentences summarizing it,\nas well as links to the PR and relevant issues addressed by the PR. Do not\nput a new title, the next command will do it for you.\nUse semantic versioning to decide whether the new release will be a 'major',\n'minor' or 'patch' release. It's usually one of the latter two, depending on\nwhether new backward compatible APIs were added, or simply some bugs were fixed.\nRun python setup.py release command from the tip of the main branch.\nBy default this bumps the third or 'patch' digit only, unless you pass --major\nor --minor to bump respectively the first or second digit.\nThis bumps the package version string, extracts the changes since the latest\nversion from NEWS.rst, and uses that text to create an annotated git tag\n(or a signed git tag if you pass the --sign option and your git and Github\naccount are configured for signing commits\nusing a GPG key).\nIt also commits an additional version bump which opens the main branch for\nthe subsequent developmental cycle\nPush both the tag and commit to the upstream repository, by running the command\ngit push --follow-tags. Note: it may push other local tags as well, be\ncareful.\nLet the CI build the wheel and source distribution packages and verify both\nget uploaded to the Python Package Index (PyPI).\n[Optional] Go to fonttools Github Releases\npage and create a new release, copy-pasting the content of the git tag\nmessage. This way, the release notes are nicely formatted as markdown, and\nusers watching the repo will get an email notification. One day we shall\nautomate that too.\n\n\nAcknowledgements\nIn alphabetical order:\naschmitz, Olivier Berten, Samyak Bhuta, Erik van Blokland, Petr van Blokland,\nJelle Bosma, Sascha Brawer, Tom Byrer, Antonio Cavedoni, Fr\u00e9d\u00e9ric\nCoiffier, Vincent Connare, David Corbett, Simon Cozens, Dave Crossland,\nSimon Daniels, Peter Dekkers, Behdad Esfahbod, Behnam Esfahbod, Hannes\nFamira, Sam Fishman, Matt Fontaine, Takaaki Fuji, Yannis Haralambous, Greg\nHitchcock, Jeremie Hornus, Khaled Hosny, John Hudson, Denis Moyogo Jacquerye,\nJack Jansen, Tom Kacvinsky, Jens Kutilek, Antoine Leca, Werner Lemberg, Tal\nLeming, Peter Lofting, Cosimo Lupo, Olli Meier, Masaya Nakamura, Dave Opstad,\nLaurence Penney, Roozbeh Pournader, Garret Rieger, Read Roberts, Colin Rofls,\nGuido van Rossum, Just van Rossum, Andreas Seidel, Georg Seifert, Chris\nSimpkins, Miguel Sousa, Adam Twardoch, Adrien T\u00e9tar, Vitaly Volkov,\nPaul Wise.\n\nCopyrights\n\nCopyright (c) 1999-2004 Just van Rossum, LettError\n(just@letterror.com)\nSee LICENSE for the full license.\n\nCopyright (c) 2000 BeOpen.com. All Rights Reserved.\nCopyright (c) 1995-2001 Corporation for National Research Initiatives.\nAll Rights Reserved.\nCopyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All\nRights Reserved.\nHave fun!\n\n\n", "description": "Library to manipulate font files."}, {"name": "folium", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nfolium\nPython Data, Leaflet.js Maps\nInstallation\nDocumentation\nGallery\nContributing\nChangelog\nPackages and plugins\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n \n \n \n \n\n\nfolium\n\n\nPython Data, Leaflet.js Maps\nfolium builds on the data wrangling strengths of the Python ecosystem and the\nmapping strengths of the Leaflet.js library. 
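As a quick, hedged illustration (the coordinates and output file name are placeholders, not taken from this README):\nimport folium\n\n# Create a Leaflet map centred on an arbitrary point and drop a single marker on it.\nm = folium.Map(location=[45.5236, -122.6750], zoom_start=13)\nfolium.Marker([45.5236, -122.6750], popup='Example marker').add_to(m)\nm.save('map.html')  # writes a self-contained HTML page you can open in a browser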
Manipulate your data in Python,\nthen visualize it in a Leaflet map via folium.\n\nInstallation\n$ pip install folium\nor\n$ conda install -c conda-forge folium\n\nDocumentation\nhttps://python-visualization.github.io/folium/\n\nGallery\nThere are two galleries of Jupyter notebooks with examples, which you can see\nusing Jupyter's nbviewer:\nhttps://nbviewer.jupyter.org/github/python-visualization/folium/tree/main/examples/\nhttps://nbviewer.org/github/python-visualization/folium_contrib/tree/main/notebooks/\n\nContributing\nWe love contributions!  folium is open source, built on open source,\nand we'd love to have you hang out in our community.\nSee our complete contributor's guide for more info.\n\nChangelog\nCheck the changelog for a detailed list of the latest changes.\n\nPackages and plugins\nPackages:\n\nhttps://github.com/geopandas/xyzservices: a repository of raster basemap tilesets.\nhttps://github.com/randyzwitch/streamlit-folium: run folium in a Streamlit app.\nhttps://github.com/FEMlium/FEMlium: interactive visualization of finite element simulations on geographic maps with folium.\n\nPlugins:\n\nhttps://github.com/onaci/folium-glify-layer: provide fast webgl rendering for large GeoJSON FeatureCollections\n\n\n\n", "description": "Python Data Visualization on Leaflet Maps."}, {"name": "flask", "readme": "\nFlask is a lightweight WSGI web application framework. It is designed\nto make getting started quick and easy, with the ability to scale up to\ncomplex applications. It began as a simple wrapper around Werkzeug\nand Jinja and has become one of the most popular Python web\napplication frameworks.\nFlask offers suggestions, but doesn\u2019t enforce any dependencies or\nproject layout. It is up to the developer to choose the tools and\nlibraries they want to use. There are many extensions provided by the\ncommunity that make adding new functionality easy.\n\nInstalling\nInstall and update using pip:\n$ pip install -U Flask\n\n\nA Simple Example\n# save this as app.py\nfrom flask import Flask\n\napp = Flask(__name__)\n\n@app.route(\"/\")\ndef hello():\n    return \"Hello, World!\"\n$ flask run\n  * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\n\n\nContributing\nFor guidance on setting up a development environment and how to make a\ncontribution to Flask, see the contributing guidelines.\n\n\nDonate\nThe Pallets organization develops and supports Flask and the libraries\nit uses. In order to grow the community of contributors and users, and\nallow the maintainers to devote more time to the projects, please\ndonate today.\n\n\nLinks\n\nDocumentation: https://flask.palletsprojects.com/\nChanges: https://flask.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/Flask/\nSource Code: https://github.com/pallets/flask/\nIssue Tracker: https://github.com/pallets/flask/issues/\nChat: https://discord.gg/pallets\n\n\n", "description": "Micro web framework powered by Werkzeug and Jinja2."}, {"name": "Flask-Login", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nFlask-Login\nInstallation\nUsage\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\nFlask-Login\n\n\n\nFlask-Login provides user session management for Flask. It handles the common\ntasks of logging in, logging out, and remembering your users' sessions over\nextended periods of time.\nFlask-Login is not bound to any particular database system or permissions\nmodel. 
The only requirement is that your user objects implement a few methods,\nand that you provide a callback to the extension capable of loading users from\ntheir ID.\nInstallation\nInstall the extension with pip:\n$ pip install flask-login\nUsage\nOnce installed, the Flask-Login is easy to use. Let's walk through setting up\na basic application. Also please note that this is a very basic guide: we will\nbe taking shortcuts here that you should never take in a real application.\nTo begin we'll set up a Flask app:\nimport flask\n\napp = flask.Flask(__name__)\napp.secret_key = 'super secret string'  # Change this!\nFlask-Login works via a login manager. To kick things off, we'll set up the\nlogin manager by instantiating it and telling it about our Flask app:\nimport flask_login\n\nlogin_manager = flask_login.LoginManager()\n\nlogin_manager.init_app(app)\nTo keep things simple we're going to use a dictionary to represent a database\nof users. In a real application, this would be an actual persistence layer.\nHowever it's important to point out this is a feature of Flask-Login: it\ndoesn't care how your data is stored so long as you tell it how to retrieve it!\n# Our mock database.\nusers = {'foo@bar.tld': {'password': 'secret'}}\nWe also need to tell Flask-Login how to load a user from a Flask request and\nfrom its session. To do this we need to define our user object, a\nuser_loader callback, and a request_loader callback.\nclass User(flask_login.UserMixin):\n    pass\n\n\n@login_manager.user_loader\ndef user_loader(email):\n    if email not in users:\n        return\n\n    user = User()\n    user.id = email\n    return user\n\n\n@login_manager.request_loader\ndef request_loader(request):\n    email = request.form.get('email')\n    if email not in users:\n        return\n\n    user = User()\n    user.id = email\n    return user\nNow we're ready to define our views. We can start with a login view, which will\npopulate the session with authentication bits. After that we can define a view\nthat requires authentication.\n@app.route('/login', methods=['GET', 'POST'])\ndef login():\n    if flask.request.method == 'GET':\n        return '''\n               <form action='login' method='POST'>\n                <input type='text' name='email' id='email' placeholder='email'/>\n                <input type='password' name='password' id='password' placeholder='password'/>\n                <input type='submit' name='submit'/>\n               </form>\n               '''\n\n    email = flask.request.form['email']\n    if email in users and flask.request.form['password'] == users[email]['password']:\n        user = User()\n        user.id = email\n        flask_login.login_user(user)\n        return flask.redirect(flask.url_for('protected'))\n\n    return 'Bad login'\n\n\n@app.route('/protected')\n@flask_login.login_required\ndef protected():\n    return 'Logged in as: ' + flask_login.current_user.id\nFinally we can define a view to clear the session and log users out:\n@app.route('/logout')\ndef logout():\n    flask_login.logout_user()\n    return 'Logged out'\nWe now have a basic working application that makes use of session-based\nauthentication. To round things off, we should provide a callback for login\nfailures:\n@login_manager.unauthorized_handler\ndef unauthorized_handler():\n    return 'Unauthorized', 401\nDocumentation for Flask-Login is available on ReadTheDocs.\nFor complete understanding of available configuration, please refer to the source code.\nContributing\nWe welcome contributions! 
If you would like to hack on Flask-Login, please\nfollow these steps:\n\nFork this repository\nMake your changes\nInstall the dev requirements with pip install -r requirements/dev.txt\nSubmit a pull request after running tox (ensure it does not error!)\n\nPlease give us adequate time to review your submission. Thanks!\n\n\n"}, {"name": "Flask-Cors", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFlask-CORS\nInstallation\nUsage\nSimple Usage\nResource specific CORS\nRoute specific CORS via decorator\nDocumentation\nTroubleshooting\nTests\nContributing\nCredits\n\n\n\n\n\nREADME.rst\n\n\n\n\nFlask-CORS\n\n \n \n\n\n\nA Flask extension for handling Cross Origin Resource Sharing (CORS), making cross-origin AJAX possible.\nThis package has a simple philosophy: when you want to enable CORS, you wish to enable it for all use cases on a domain.\nThis means no mucking around with different allowed headers, methods, etc.\nBy default, submission of cookies across domains is disabled due to the security implications.\nPlease see the documentation for how to enable credential'ed requests, and please make sure you add some sort of CSRF protection before doing so!\n\nInstallation\nInstall the extension with using pip, or easy_install.\n$ pip install -U flask-cors\n\nUsage\nThis package exposes a Flask extension which by default enables CORS support on all routes, for all origins and methods.\nIt allows parameterization of all CORS headers on a per-resource level.\nThe package also contains a decorator, for those who prefer this approach.\n\nSimple Usage\nIn the simplest case, initialize the Flask-Cors extension with default arguments in order to allow CORS for all domains on all routes.\nSee the full list of options in the documentation.\nfrom flask import Flask\nfrom flask_cors import CORS\n\napp = Flask(__name__)\nCORS(app)\n\n@app.route(\"/\")\ndef helloWorld():\n  return \"Hello, cross-origin-world!\"\n\nResource specific CORS\nAlternatively, you can specify CORS options on a resource and origin level of granularity by passing a dictionary as the resources option, mapping paths to a set of options.\nSee the full list of options in the documentation.\napp = Flask(__name__)\ncors = CORS(app, resources={r\"/api/*\": {\"origins\": \"*\"}})\n\n@app.route(\"/api/v1/users\")\ndef list_users():\n  return \"user example\"\n\nRoute specific CORS via decorator\nThis extension also exposes a simple decorator to decorate flask routes with.\nSimply add @cross_origin() below a call to Flask's @app.route(..) 
to allow CORS on a given route.\nSee the full list of options in the decorator documentation.\n@app.route(\"/\")\n@cross_origin()\ndef helloWorld():\n  return \"Hello, cross-origin-world!\"\n\nDocumentation\nFor a full list of options, please see the full documentation\n\nTroubleshooting\nIf things aren't working as you expect, enable logging to help understand what is going on under the hood, and why.\nlogging.getLogger('flask_cors').level = logging.DEBUG\n\nTests\nA simple set of tests is included in test/.\nTo run, install nose, and simply invoke nosetests or python setup.py test to exercise the tests.\nIf nosetests does not work for you, due to it no longer working with newer python versions.\nYou can use pytest to run the tests instead.\n\nContributing\nQuestions, comments or improvements?\nPlease create an issue on Github, tweet at @corydolphin or send me an email.\nI do my best to include every contribution proposed in any way that I can.\n\nCredits\nThis Flask extension is based upon the Decorator for the HTTP Access Control written by Armin Ronacher.\n\n\n"}, {"name": "Flask-CacheBuster", "readme": "\n\nflask-cachebuster\nFlask-CacheBuster is a lightweight http://flask.pocoo.org/ extension that adds a hash to the URL query parameters of each static file. This lets you safely declare your static resources as indefinitely cacheable because they automatically get new URLs when their contents change.\n\n\nNotes:\nInspired by https://github.com/ChrisTM/Flask-CacheBust, and an updated version of https://github.com/daxlab/Flask-Cache-Buster to work with python 3.+\n\n\nInstallation\nUsing pip:\npip install flask-cachebuster\n\nUsage\nConfiguration:\nfrom flask_cachebuster import CacheBuster\n\nconfig = { 'extensions': ['.js', '.css', '.csv'], 'hash_size': 5 }\n\ncache_buster = CacheBuster(config=config)\n\ncache_buster.init_app(app)\n\n\nConfiguration\nConfiguration:\n* extensions - file extensions to bust\n* hash_size - looks something like this `/static/index.css%3Fq3` where [%3Fq3] is the hash size.\nThe http://flask.pocoo.org/docs/0.12/api/#flask.url_for function will now cache-bust your static files. For example, this template:\n<script src=\"{{ url_for('static', filename='js/main.js') }}\"></script>\nwill render like this:\n<script src=\"/static/js/main.js?%3Fq%3Dc5b5b2fa19\"></script>\n\n\n"}, {"name": "Fiona", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nFiona\nInstallation\nPython Usage\nCLI Usage\nDocumentation\n\n\n\n\n\nREADME.rst\n\n\n\n\nFiona\n\nFiona streams simple feature data to and from GIS formats like GeoPackage and\nShapefile.\nFiona can read and write real-world data using multi-layered GIS formats,\nzipped and in-memory virtual file systems, from files on your hard drive or in\ncloud storage. This project includes Python modules and a command line\ninterface (CLI).\nFiona depends on GDAL but is different from GDAL's own\nbindings. Fiona is designed to\nbe highly productive and to make it easy to write code which is easy to read.\n\nInstallation\nFiona has several extension modules which link against\nlibgdal. This complicates installation. Binary distributions (wheels)\ncontaining libgdal and its own dependencies are available from the Python\nPackage Index and can be installed using pip.\npip install fiona\nThese wheels are mainly intended to make installation easy for simple\napplications, not so much for production. They are not tested for compatibility\nwith all other binary wheels, conda packages, or QGIS, and omit many of GDAL's\noptional format drivers. 
If you need, for example, GML support you will need to\nbuild and install Fiona from a source distribution. It is possible to install\nFiona from source using pip (version >= 22.3) and the --no-binary option. A\nspecific GDAL installation can be selected by setting the GDAL_CONFIG\nenvironment variable.\npip install -U pip\npip install --no-binary fiona fiona\nMany users find Anaconda and conda-forge a good way to install Fiona and get\naccess to more optional format drivers (like GML).\nFiona 2.0 requires Python 3.7 or higher and GDAL 3.2 or higher.\n\nPython Usage\nFeatures are read from and written to file-like Collection objects returned\nfrom the fiona.open() function. Features are data classes modeled on the\nGeoJSON format. They don't have any spatial methods of their own, so if you\nwant to transform them you will need Shapely or something like it. Here is an\nexample of using Fiona to read some features from one data file, change their\ngeometry attributes using Shapely, and write them to a new data file.\nimport fiona\nfrom fiona import Feature, Geometry\nfrom shapely.geometry import mapping, shape\n\n# Open a file for reading. We'll call this the source.\nwith fiona.open(\n    \"zip+https://github.com/Toblerity/Fiona/files/11151652/coutwildrnp.zip\"\n) as src:\n\n    # The file we'll write to must be initialized with a coordinate\n    # system, a format driver name, and a record schema. We can get\n    # initial values from the open source's profile property and then\n    # modify them as we need.\n    profile = src.profile\n    profile[\"schema\"][\"geometry\"] = \"Point\"\n    profile[\"driver\"] = \"GPKG\"\n\n    # Open an output file, using the same format driver and coordinate\n    # reference system as the source. The profile mapping fills in the\n    # keyword parameters of fiona.open.\n    with fiona.open(\"centroids.gpkg\", \"w\", **profile) as dst:\n\n        # Process only the feature records intersecting a box.\n        for feat in src.filter(bbox=(-107.0, 37.0, -105.0, 39.0)):\n\n            # Get the feature's centroid.\n            centroid_shp = shape(feat.geometry).centroid\n            new_geom = Geometry.from_dict(centroid_shp)\n\n            # Write the feature out.\n            dst.write(\n                Feature(geometry=new_geom, properties=f.properties)\n            )\n\n    # The destination's contents are flushed to disk and the file is\n    # closed when its with block ends. This effectively\n    # executes ``dst.flush(); dst.close()``.\n\nCLI Usage\nFiona's command line interface, named \"fio\", is documented at docs/cli.rst. The CLI has a\nnumber of different commands. 
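As a smaller, hedged companion to the example above (the file path is a placeholder), a layer's metadata can be inspected from the same Collection object:\nimport fiona\n\n# 'example.gpkg' is a placeholder path; any format supported by your GDAL build works.\nwith fiona.open('example.gpkg') as src:\n    print(src.driver)   # format driver, e.g. 'GPKG'\n    print(src.crs)      # coordinate reference system of the layer\n    print(src.schema)   # geometry type and property field types\n    print(len(src))     # number of features in the layer\n    for feat in src:\n        print(feat.properties)  # Feature objects expose .properties, as in the example above\n        break\nThe fio command line interface described next works against the same datasets from the shell.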
Its fio cat command streams GeoJSON features\nfrom any dataset.\n$ fio cat --compact tests/data/coutwildrnp.shp | jq -c '.'\n{\"geometry\":{\"coordinates\":[[[-111.73527526855469,41.995094299316406],...]]}}\n...\n\nDocumentation\nFor more details about this project, please see:\n\nFiona home page\nDocs and manual\nExamples\nMain user discussion group\nDevelopers discussion group\n\n\n\n", "description": "Python wrapper for vector data access functions from the OGR library."}, {"name": "filelock", "readme": "\n\n\n\nREADME.md\n\n\n\n\nfilelock\n\n\n\n\n\n\nFor more information checkout the official documentation.\n\n\n", "description": "Provides a platform independent file lock."}, {"name": "ffmpy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nffmpy\nInstallation\nQuick example\nDocumentation\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\nffmpy\nffmpy is a simplistic FFmpeg command line wrapper. It implements a Pythonic interface for FFmpeg command line compilation and uses Python subprocess module to execute compiled command line.\n\nInstallation\nYou guessed it:\npip install ffmpy\n\n\nQuick example\n>>> import ffmpy\n>>> ff = ffmpy.FFmpeg(\n...     inputs={'input.mp4': None},\n...     outputs={'output.avi': None}\n... )\n>>> ff.run()\nThis will take input.mp4 file in the current directory as the input, change the video container from MP4 to AVI without changing any other video parameters and create a new output file output.avi in the current directory.\n\nDocumentation\nhttp://ffmpy.rtfd.io\nSee Examples section for usage examples.\n\nLicense\nffmpy is licensed under the terms of MIT license\n\n\n", "description": "FFmpeg command line wrapper."}, {"name": "ffmpeg-python", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nffmpeg-python: Python bindings for FFmpeg\nOverview\nQuickstart\nAPI reference\nComplex filter graphs\nInstallation\nInstalling ffmpeg-python\nInstalling FFmpeg\nExamples\nCustom Filters\nFrequently asked questions\nContributing\nRunning tests\nSpecial thanks\nAdditional Resources\n\n\n\n\n\nREADME.md\n\n\n\n\nffmpeg-python: Python bindings for FFmpeg\n\n\nOverview\nThere are tons of Python FFmpeg wrappers out there but they seem to lack complex filter support.  
ffmpeg-python works well for simple as well as complex signal graphs.\nQuickstart\nFlip a video horizontally:\nimport ffmpeg\nstream = ffmpeg.input('input.mp4')\nstream = ffmpeg.hflip(stream)\nstream = ffmpeg.output(stream, 'output.mp4')\nffmpeg.run(stream)\nOr if you prefer a fluent interface:\nimport ffmpeg\n(\n    ffmpeg\n    .input('input.mp4')\n    .hflip()\n    .output('output.mp4')\n    .run()\n)\nAPI reference\nComplex filter graphs\nFFmpeg is extremely powerful, but its command-line interface gets really complicated rather quickly - especially when working with signal graphs and doing anything more than trivial.\nTake for example a signal graph that looks like this:\n\nThe corresponding command-line arguments are pretty gnarly:\nffmpeg -i input.mp4 -i overlay.png -filter_complex \"[0]trim=start_frame=10:end_frame=20[v0];\\\n    [0]trim=start_frame=30:end_frame=40[v1];[v0][v1]concat=n=2[v2];[1]hflip[v3];\\\n    [v2][v3]overlay=eof_action=repeat[v4];[v4]drawbox=50:50:120:120:red:t=5[v5]\"\\\n    -map [v5] output.mp4\nMaybe this looks great to you, but if you're not an FFmpeg command-line expert, it probably looks alien.\nIf you're like me and find Python to be powerful and readable, it's easier with ffmpeg-python:\nimport ffmpeg\n\nin_file = ffmpeg.input('input.mp4')\noverlay_file = ffmpeg.input('overlay.png')\n(\n    ffmpeg\n    .concat(\n        in_file.trim(start_frame=10, end_frame=20),\n        in_file.trim(start_frame=30, end_frame=40),\n    )\n    .overlay(overlay_file.hflip())\n    .drawbox(50, 50, 120, 120, color='red', thickness=5)\n    .output('out.mp4')\n    .run()\n)\nffmpeg-python takes care of running ffmpeg with the command-line arguments that correspond to the above filter diagram, in familiar Python terms.\n\nReal-world signal graphs can get a heck of a lot more complex, but ffmpeg-python handles arbitrarily large (directed-acyclic) signal graphs.\nInstallation\nInstalling ffmpeg-python\nThe latest version of ffmpeg-python can be acquired via a typical pip install:\npip install ffmpeg-python\nOr the source can be cloned and installed from locally:\ngit clone git@github.com:kkroening/ffmpeg-python.git\npip install -e ./ffmpeg-python\n\nNote: ffmpeg-python makes no attempt to download/install FFmpeg, as ffmpeg-python is merely a pure-Python wrapper - whereas FFmpeg installation is platform-dependent/environment-specific, and is thus the responsibility of the user, as described below.\n\nInstalling FFmpeg\nBefore using ffmpeg-python, FFmpeg must be installed and accessible via the $PATH environment variable.\nThere are a variety of ways to install FFmpeg, such as the official download links, or using your package manager of choice (e.g. 
sudo apt install ffmpeg on Debian/Ubuntu, brew install ffmpeg on OS X, etc.).\nRegardless of how FFmpeg is installed, you can check if your environment path is set correctly by running the ffmpeg command from the terminal, in which case the version information should appear, as in the following example (truncated for brevity):\n$ ffmpeg\nffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers\n  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)\n\n\nNote: The actual version information displayed here may vary from one system to another; but if a message such as ffmpeg: command not found appears instead of the version information, FFmpeg is not properly installed.\n\nExamples\nWhen in doubt, take a look at the examples to see if there's something that's close to whatever you're trying to do.\nHere are a few:\n\n\nConvert video to numpy array\n\n\nGenerate thumbnail for video\n\n\nRead raw PCM audio via pipe\n\n\nJupyterLab/Notebook stream editor\n\n\n\n\nTensorflow/DeepDream streaming\n\n\nSee the Examples README for additional examples.\nCustom Filters\nDon't see the filter you're looking for?  While ffmpeg-python includes shorthand notation for some of the most commonly used filters (such as concat), all filters can be referenced via the .filter operator:\nstream = ffmpeg.input('dummy.mp4')\nstream = ffmpeg.filter(stream, 'fps', fps=25, round='up')\nstream = ffmpeg.output(stream, 'dummy2.mp4')\nffmpeg.run(stream)\nOr fluently:\n(\n    ffmpeg\n    .input('dummy.mp4')\n    .filter('fps', fps=25, round='up')\n    .output('dummy2.mp4')\n    .run()\n)\nSpecial option names:\nArguments with special names such as -qscale:v (variable bitrate), -b:v (constant bitrate), etc. can be specified as a keyword-args dictionary as follows:\n(\n    ffmpeg\n    .input('in.mp4')\n    .output('out.mp4', **{'qscale:v': 3})\n    .run()\n)\nMultiple inputs:\nFilters that take multiple input streams can be used by passing the input streams as an array to ffmpeg.filter:\nmain = ffmpeg.input('main.mp4')\nlogo = ffmpeg.input('logo.png')\n(\n    ffmpeg\n    .filter([main, logo], 'overlay', 10, 10)\n    .output('out.mp4')\n    .run()\n)\nMultiple outputs:\nFilters that produce multiple outputs can be used with .filter_multi_output:\nsplit = (\n    ffmpeg\n    .input('in.mp4')\n    .filter_multi_output('split')  # or `.split()`\n)\n(\n    ffmpeg\n    .concat(split[0], split[1].reverse())\n    .output('out.mp4')\n    .run()\n)\n(In this particular case, .split() is the equivalent shorthand, but the general approach works for other multi-output filters)\nString expressions:\nExpressions to be interpreted by ffmpeg can be included as string parameters and reference any special ffmpeg variable names:\n(\n    ffmpeg\n    .input('in.mp4')\n    .filter('crop', 'in_w-2*10', 'in_h-2*20')\n    .input('out.mp4')\n)\n\nWhen in doubt, refer to the existing filters, examples, and/or the official ffmpeg documentation.\nFrequently asked questions\nWhy do I get an import/attribute/etc. error from import ffmpeg?\nMake sure you ran pip install ffmpeg-python and not pip install ffmpeg (wrong) or pip install python-ffmpeg (also wrong).\nWhy did my audio stream get dropped?\nSome ffmpeg filters drop audio streams, and care must be taken to preserve the audio in the final output.  
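For instance, a minimal sketch (file names are placeholders) that filters the video stream while carrying the original audio through to the output:\nimport ffmpeg\n\ninp = ffmpeg.input('in.mp4')\nvideo = inp.video.hflip()   # apply a video-only filter\naudio = inp.audio           # pass the original audio through untouched\nffmpeg.output(video, audio, 'out.mp4').run()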
The .audio and .video operators can be used to reference the audio/video portions of a stream so that they can be processed separately and then re-combined later in the pipeline.\nThis dilemma is intrinsic to ffmpeg, and ffmpeg-python tries to stay out of the way while users may refer to the official ffmpeg documentation as to why certain filters drop audio.\nAs usual, take a look at the examples (Audio/video pipeline in particular).\nHow can I find out the used command line arguments?\nYou can run stream.get_args() before stream.run() to retrieve the command line arguments that will be passed to ffmpeg. You can also run stream.compile() that also includes the ffmpeg executable as the first argument.\nHow do I do XYZ?\nTake a look at each of the links in the Additional Resources section at the end of this README.  If you look everywhere and can't find what you're looking for and have a question that may be relevant to other users, you may open an issue asking how to do it, while providing a thorough explanation of what you're trying to do and what you've tried so far.\nIssues not directly related to ffmpeg-python or issues asking others to write your code for you or how to do the work of solving a complex signal processing problem for you that's not relevant to other users will be closed.\nThat said, we hope to continue improving our documentation and provide a community of support for people using ffmpeg-python to do cool and exciting things.\nContributing\n\nOne of the best things you can do to help make ffmpeg-python better is to answer open questions in the issue tracker.  The questions that are answered will be tagged and incorporated into the documentation, examples, and other learning resources.\nIf you notice things that could be better in the documentation or overall development experience, please say so in the issue tracker.  And of course, feel free to report any bugs or submit feature requests.\nPull requests are welcome as well, but it wouldn't hurt to touch base in the issue tracker or hop on the Matrix chat channel first.\nAnyone who fixes any of the open bugs or implements requested enhancements is a hero, but changes should include passing tests.\nRunning tests\ngit clone git@github.com:kkroening/ffmpeg-python.git\ncd ffmpeg-python\nvirtualenv venv\n. venv/bin/activate  # (OS X / Linux)\nvenv\\bin\\activate    # (Windows)\npip install -e .[dev]\npytest\n\nSpecial thanks\n\nFabrice Bellard\nThe FFmpeg team\nArne de Laat\nDavide Depau\nDim\nNoah Stier\n\nAdditional Resources\n\nAPI Reference\nExamples\nFilters\nFFmpeg Homepage\nFFmpeg Documentation\nFFmpeg Filters Documentation\nTest cases\nIssue tracker\nMatrix Chat: #ffmpeg-python:matrix.org\n\n\n\n"}, {"name": "fastprogress", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nfastprogress\nInstall\nUsage\nExample 1\nExample 2\nExample 3\n\n\n\n\n\nREADME.md\n\n\n\n\nfastprogress\nA fast and simple progress bar for Jupyter Notebook and console. Created by Sylvain Gugger for fast.ai.\n\nInstall\nTo install simply use\npip install fastprogress\n\nor:\nconda install -c fastai fastprogress\n\nNote that this requires python 3.6 or later.\nUsage\nExample 1\nHere is a simple example. Each bar takes an iterator as a main argument, and we can specify the second bar is nested with the first by adding the argument parent=mb. 
We can then:\n\nadd a comment in the first bar by changing the value of mb.main_bar.comment\nadd a comment in the second bar by changing the value of mb.child.comment\nwrite a line between the two bars with mb.write('message')\n\nfrom fastprogress.fastprogress import master_bar, progress_bar\nfrom time import sleep\nmb = master_bar(range(10))\nfor i in mb:\n    for j in progress_bar(range(100), parent=mb):\n        sleep(0.01)\n        mb.child.comment = f'second bar stat'\n    mb.main_bar.comment = f'first bar stat'\n    mb.write(f'Finished loop {i}.')\n    #mb.update_graph(graphs, x_bounds, y_bounds)\n\nExample 2\nTo add a graph that gets plotted as the training goes, just use the method mb.update_graph. It will create the figure on its first use. Arguments are:\n\ngraphs: a list of graphs to be plotted (each of the form [x,y])\nx_bounds: the min and max values of the x axis (if None, it will use those given by the graphs)\ny_bounds: the min and max values of the y axis (if None, it will use those given by the graphs)\n\nNote that it's best to specify x_bounds and y_bounds, otherwise the box will change as the loop progresses.\nAdditionally, we can give the label of each graph via the attribute mb.names (should have as many elements as the graphs argument).\nimport numpy as np\nmb = master_bar(range(10))\nmb.names = ['cos', 'sin']\nfor i in mb:\n    for j in progress_bar(range(100), parent=mb):\n        if j%10 == 0:\n            k = 100 * i + j\n            x = np.arange(0, 2*k*np.pi/1000, 0.01)\n            y1, y2 = np.cos(x), np.sin(x)\n            graphs = [[x,y1], [x,y2]]\n            x_bounds = [0, 2*np.pi]\n            y_bounds = [-1,1]\n            mb.update_graph(graphs, x_bounds, y_bounds)\n            mb.child.comment = f'second bar stat'\n    mb.main_bar.comment = f'first bar stat'\n    mb.write(f'Finished loop {i}.')\n\nHere is the rendering in console:\n\nIf the script using this is executed with a redirect to a file, only the results of the .write method will be printed in that file.\nExample 3\nHere is an example that a typical machine learning training loop can use. It also demonstrates how to set y_bounds dynamically.\ndef plot_loss_update(epoch, epochs, mb, train_loss, valid_loss):\n    \"\"\" dynamically print the loss plot during the training/validation loop.\n        expects epoch to start from 1.\n    \"\"\"\n    x = range(1, epoch+1)\n    y = np.concatenate((train_loss, valid_loss))\n    graphs = [[x,train_loss], [x,valid_loss]]\n    x_margin = 0.2\n    y_margin = 0.05\n    x_bounds = [1-x_margin, epochs+x_margin]\n    y_bounds = [np.min(y)-y_margin, np.max(y)+y_margin]\n\n    mb.update_graph(graphs, x_bounds, y_bounds)\n\nAnd here is an emulation of a training loop that uses this function:\nfrom fastprogress.fastprogress import master_bar, progress_bar\nfrom time import sleep\nimport numpy as np\nimport random\n\nepochs = 5\nmb = master_bar(range(1, epochs+1))\n# optional: graph legend: if not set, the default is 'train'/'valid'\n# mb.names = ['first', 'second']\ntrain_loss, valid_loss = [], []\nfor epoch in mb:\n    # emulate train sub-loop\n    for batch in progress_bar(range(2), parent=mb): sleep(0.2)\n    train_loss.append(0.5 - 0.06 * epoch + random.uniform(0, 0.04))\n\n    # emulate validation sub-loop\n    for batch in progress_bar(range(2), parent=mb): sleep(0.2)\n    valid_loss.append(0.5 - 0.03 * epoch + random.uniform(0, 0.04))\n\n    plot_loss_update(epoch, epochs, mb, train_loss, valid_loss)\n\nAnd the output:\n\n\nCopyright 2017 onwards, fast.ai.
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. A copy of the License is provided in the LICENSE file in this repository.\n\n\n", "description": "Progress bar for Jupyter Notebook and console."}, {"name": "fastjsonschema", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nFast JSON schema for Python\n\n \n\nSee documentation.\n\n\n", "description": "Fastest JSON schema validator for Python."}, {"name": "fastapi", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSponsors\nOpinions\nTyper, the FastAPI of CLIs\nRequirements\nInstallation\nExample\nCreate it\nRun it\nCheck it\nInteractive API docs\nAlternative API docs\nExample upgrade\nInteractive API docs upgrade\nAlternative API docs upgrade\nRecap\nPerformance\nOptional Dependencies\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\nFastAPI framework, high performance, easy to learn, fast to code, ready for production\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nDocumentation: https://fastapi.tiangolo.com\nSource Code: https://github.com/tiangolo/fastapi\n\nFastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.7+ based on standard Python type hints.\nThe key features are:\n\nFast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic). One of the fastest Python frameworks available.\nFast to code: Increase the speed to develop features by about 200% to 300%. *\nFewer bugs: Reduce about 40% of human (developer) induced errors. *\nIntuitive: Great editor support. Completion everywhere. Less time debugging.\nEasy: Designed to be easy to use and learn. Less time reading docs.\nShort: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.\nRobust: Get production-ready code. With automatic interactive documentation.\nStandards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema.\n\n* estimation based on tests on an internal development team, building production applications.\nSponsors\n\n\n\n\n\n\n\n\n\n\n\n\nOther sponsors\nOpinions\n\"[...] I'm using FastAPI a ton these days. [...] I'm actually planning to use it for all of my team's ML services at Microsoft. Some of them are getting integrated into the core Windows product and some Office products.\"\nKabir Khan - Microsoft (ref)\n\n\"We adopted the FastAPI library to spawn a REST server that can be queried to obtain predictions. [for Ludwig]\"\nPiero Molino, Yaroslav Dudin, and Sai Sumanth Miryala - Uber (ref)\n\n\"Netflix is pleased to announce the open-source release of our crisis management orchestration framework: Dispatch! [built with FastAPI]\"\nKevin Glisson, Marc Vilanova, Forest Monsen - Netflix (ref)\n\n\"I\u2019m over the moon excited about FastAPI. It\u2019s so fun!\"\nBrian Okken - Python Bytes podcast host (ref)\n\n\"Honestly, what you've built looks super solid and polished. In many ways, it's what I wanted Hug to be - it's really inspiring to see someone build that.\"\nTimothy Crosley - Hug creator (ref)\n\n\"If you're looking to learn one modern framework for building REST APIs, check out FastAPI [...] It's fast, easy to use and easy to learn [...]\"\n\"We've switched over to FastAPI for our APIs [...] I think you'll like it [...]\"\nInes Montani - Matthew Honnibal - Explosion AI founders - spaCy creators (ref) - (ref)\n\n\"If anyone is looking to build a production Python API, I would highly recommend FastAPI. 
It is beautifully designed, simple to use and highly scalable, it has become a key component in our API first development strategy and is driving many automations and services such as our Virtual TAC Engineer.\"\nDeon Pillsbury - Cisco (ref)\n\nTyper, the FastAPI of CLIs\n\nIf you are building a CLI app to be used in the terminal instead of a web API, check out Typer.\nTyper is FastAPI's little sibling. And it's intended to be the FastAPI of CLIs. \u2328\ufe0f \ud83d\ude80\nRequirements\nPython 3.7+\nFastAPI stands on the shoulders of giants:\n\nStarlette for the web parts.\nPydantic for the data parts.\n\nInstallation\n\n$ pip install fastapi\n\n---> 100%\n\nYou will also need an ASGI server, for production such as Uvicorn or Hypercorn.\n\n$ pip install \"uvicorn[standard]\"\n\n---> 100%\n\nExample\nCreate it\n\nCreate a file main.py with:\n\nfrom typing import Union\n\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\n\n@app.get(\"/\")\ndef read_root():\n    return {\"Hello\": \"World\"}\n\n\n@app.get(\"/items/{item_id}\")\ndef read_item(item_id: int, q: Union[str, None] = None):\n    return {\"item_id\": item_id, \"q\": q}\n\nOr use async def...\nIf your code uses async / await, use async def:\nfrom typing import Union\n\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\n\n@app.get(\"/\")\nasync def read_root():\n    return {\"Hello\": \"World\"}\n\n\n@app.get(\"/items/{item_id}\")\nasync def read_item(item_id: int, q: Union[str, None] = None):\n    return {\"item_id\": item_id, \"q\": q}\nNote:\nIf you don't know, check the \"In a hurry?\" section about async and await in the docs.\n\nRun it\nRun the server with:\n\n$ uvicorn main:app --reload\n\nINFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\nINFO:     Started reloader process [28720]\nINFO:     Started server process [28722]\nINFO:     Waiting for application startup.\nINFO:     Application startup complete.\n\n\nAbout the command uvicorn main:app --reload...\nThe command uvicorn main:app refers to:\n\nmain: the file main.py (the Python \"module\").\napp: the object created inside of main.py with the line app = FastAPI().\n--reload: make the server restart after code changes. 
Only do this for development.\n\n\nCheck it\nOpen your browser at http://127.0.0.1:8000/items/5?q=somequery.\nYou will see the JSON response as:\n{\"item_id\": 5, \"q\": \"somequery\"}\nYou already created an API that:\n\nReceives HTTP requests in the paths / and /items/{item_id}.\nBoth paths take GET operations (also known as HTTP methods).\nThe path /items/{item_id} has a path parameter item_id that should be an int.\nThe path /items/{item_id} has an optional str query parameter q.\n\nInteractive API docs\nNow go to http://127.0.0.1:8000/docs.\nYou will see the automatic interactive API documentation (provided by Swagger UI):\n\nAlternative API docs\nAnd now, go to http://127.0.0.1:8000/redoc.\nYou will see the alternative automatic documentation (provided by ReDoc):\n\nExample upgrade\nNow modify the file main.py to receive a body from a PUT request.\nDeclare the body using standard Python types, thanks to Pydantic.\nfrom typing import Union\n\nfrom fastapi import FastAPI\nfrom pydantic import BaseModel\n\napp = FastAPI()\n\n\nclass Item(BaseModel):\n    name: str\n    price: float\n    is_offer: Union[bool, None] = None\n\n\n@app.get(\"/\")\ndef read_root():\n    return {\"Hello\": \"World\"}\n\n\n@app.get(\"/items/{item_id}\")\ndef read_item(item_id: int, q: Union[str, None] = None):\n    return {\"item_id\": item_id, \"q\": q}\n\n\n@app.put(\"/items/{item_id}\")\ndef update_item(item_id: int, item: Item):\n    return {\"item_name\": item.name, \"item_id\": item_id}\nThe server should reload automatically (because you added --reload to the uvicorn command above).\nInteractive API docs upgrade\nNow go to http://127.0.0.1:8000/docs.\n\nThe interactive API documentation will be automatically updated, including the new body:\n\n\n\nClick on the button \"Try it out\", it allows you to fill the parameters and directly interact with the API:\n\n\n\nThen click on the \"Execute\" button, the user interface will communicate with your API, send the parameters, get the results and show them on the screen:\n\n\nAlternative API docs upgrade\nAnd now, go to http://127.0.0.1:8000/redoc.\n\nThe alternative documentation will also reflect the new query parameter and body:\n\n\nRecap\nIn summary, you declare once the types of parameters, body, etc. as function parameters.\nYou do that with standard modern Python types.\nYou don't have to learn a new syntax, the methods or classes of a specific library, etc.\nJust standard Python 3.7+.\nFor example, for an int:\nitem_id: int\nor for a more complex Item model:\nitem: Item\n...and with that single declaration you get:\n\nEditor support, including:\n\nCompletion.\nType checks.\n\n\nValidation of data:\n\nAutomatic and clear errors when the data is invalid.\nValidation even for deeply nested JSON objects.\n\n\nConversion of input data: coming from the network to Python data and types. 
Reading from:\n\nJSON.\nPath parameters.\nQuery parameters.\nCookies.\nHeaders.\nForms.\nFiles.\n\n\nConversion of output data: converting from Python data and types to network data (as JSON):\n\nConvert Python types (str, int, float, bool, list, etc).\ndatetime objects.\nUUID objects.\nDatabase models.\n...and many more.\n\n\nAutomatic interactive API documentation, including 2 alternative user interfaces:\n\nSwagger UI.\nReDoc.\n\n\n\n\nComing back to the previous code example, FastAPI will:\n\nValidate that there is an item_id in the path for GET and PUT requests.\nValidate that the item_id is of type int for GET and PUT requests.\n\nIf it is not, the client will see a useful, clear error.\n\n\nCheck if there is an optional query parameter named q (as in http://127.0.0.1:8000/items/foo?q=somequery) for GET requests.\n\nAs the q parameter is declared with = None, it is optional.\nWithout the None it would be required (as is the body in the case with PUT).\n\n\nFor PUT requests to /items/{item_id}, Read the body as JSON:\n\nCheck that it has a required attribute name that should be a str.\nCheck that it has a required attribute price that has to be a float.\nCheck that it has an optional attribute is_offer, that should be a bool, if present.\nAll this would also work for deeply nested JSON objects.\n\n\nConvert from and to JSON automatically.\nDocument everything with OpenAPI, that can be used by:\n\nInteractive documentation systems.\nAutomatic client code generation systems, for many languages.\n\n\nProvide 2 interactive documentation web interfaces directly.\n\n\nWe just scratched the surface, but you already get the idea of how it all works.\nTry changing the line with:\n    return {\"item_name\": item.name, \"item_id\": item_id}\n...from:\n        ... \"item_name\": item.name ...\n...to:\n        ... \"item_price\": item.price ...\n...and see how your editor will auto-complete the attributes and know their types:\n\nFor a more complete example including more features, see the Tutorial - User Guide.\nSpoiler alert: the tutorial - user guide includes:\n\nDeclaration of parameters from other different places as: headers, cookies, form fields and files.\nHow to set validation constraints as maximum_length or regex.\nA very powerful and easy to use Dependency Injection system.\nSecurity and authentication, including support for OAuth2 with JWT tokens and HTTP Basic auth.\nMore advanced (but equally easy) techniques for declaring deeply nested JSON models (thanks to Pydantic).\nGraphQL integration with Strawberry and other libraries.\nMany extra features (thanks to Starlette) as:\n\nWebSockets\nextremely easy tests based on HTTPX and pytest\nCORS\nCookie Sessions\n...and more.\n\n\n\nPerformance\nIndependent TechEmpower benchmarks show FastAPI applications running under Uvicorn as one of the fastest Python frameworks available, only below Starlette and Uvicorn themselves (used internally by FastAPI). 
(*)\nTo understand more about it, see the section Benchmarks.\nOptional Dependencies\nUsed by Pydantic:\n\nemail_validator - for email validation.\npydantic-settings - for settings management.\npydantic-extra-types - for extra types to be used with Pydantic.\n\nUsed by Starlette:\n\nhttpx - Required if you want to use the TestClient.\njinja2 - Required if you want to use the default template configuration.\npython-multipart - Required if you want to support form \"parsing\", with request.form().\nitsdangerous - Required for SessionMiddleware support.\npyyaml - Required for Starlette's SchemaGenerator support (you probably don't need it with FastAPI).\nujson - Required if you want to use UJSONResponse.\n\nUsed by FastAPI / Starlette:\n\nuvicorn - for the server that loads and serves your application.\norjson - Required if you want to use ORJSONResponse.\n\nYou can install all of these with pip install \"fastapi[all]\".\nLicense\nThis project is licensed under the terms of the MIT license.\n\n\n", "description": "High performance web framework for building APIs with Python 3.7+.", "category": "Web"}, {"name": "Faker", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCompatibility\nBasic Usage\nPytest fixtures\nProviders\nLocalization\nOptimizations\nCommand line usage\nHow to create a Provider\nHow to create a Dynamic Provider\nHow to customize the Lorem Provider\nHow to use with Factory Boy\nAccessing the random instance\nUnique values\nSeeding the Generator\nTests\nContribute\nLicense\nCredits\n\n\n\n\n\nREADME.rst\n\n\n\n\nFaker is a Python package that generates fake data for you. Whether\nyou need to bootstrap your database, create good-looking XML documents,\nfill-in your persistence to stress test it, or anonymize data taken from\na production service, Faker is for you.\nFaker is heavily inspired by PHP Faker, Perl Faker, and by Ruby Faker.\n\n_|_|_|_|          _|\n_|        _|_|_|  _|  _|      _|_|    _|  _|_|\n_|_|_|  _|    _|  _|_|      _|_|_|_|  _|_|\n_|      _|    _|  _|  _|    _|        _|\n_|        _|_|_|  _|    _|    _|_|_|  _|\n\n   \n\n\nCompatibility\nStarting from version 4.0.0, Faker dropped support for Python 2 and from version 5.0.0\nonly supports Python 3.7 and above. If you still need Python 2 compatibility, please install version 3.0.1 in the\nmeantime, and please consider updating your codebase to support Python 3 so you can enjoy the\nlatest features Faker has to offer. Please see the extended docs for more details, especially\nif you are upgrading from version 2.0.4 and below as there might be breaking changes.\nThis package was also previously called fake-factory which was already deprecated by the end\nof 2016, and much has changed since then, so please ensure that your project and its dependencies\ndo not depend on the old package.\n\nBasic Usage\nInstall with pip:\npip install Faker\nUse faker.Faker() to create and initialize a faker\ngenerator, which can generate data by accessing properties named after\nthe type of data you want.\nfrom faker import Faker\nfake = Faker()\n\nfake.name()\n# 'Lucy Cechtelar'\n\nfake.address()\n# '426 Jordy Lodge\n#  Cartwrightshire, SC 88120-6700'\n\nfake.text()\n# 'Sint velit eveniet. Rerum atque repellat voluptatem quia rerum. Numquam excepturi\n#  beatae sint laudantium consequatur. Magni occaecati itaque sint et sit tempore. Nesciunt\n#  amet quidem. Iusto deleniti cum autem ad quia aperiam.\n#  A consectetur quos aliquam. In iste aliquid et aut similique suscipit. Consequatur qui\n#  quaerat iste minus hic expedita. 
Consequuntur error magni et laboriosam. Aut aspernatur\n#  voluptatem sit aliquam. Dolores voluptatum est.\n#  Aut molestias et maxime. Fugit autem facilis quos vero. Eius quibusdam possimus est.\n#  Ea quaerat et quisquam. Deleniti sunt quam. Adipisci consequatur id in occaecati.\n#  Et sint et. Ut ducimus quod nemo ab voluptatum.'\nEach call to method fake.name() yields a different (random) result.\nThis is because faker forwards faker.Generator.method_name() calls\nto faker.Generator.format(method_name).\nfor _ in range(10):\n  print(fake.name())\n\n# 'Adaline Reichel'\n# 'Dr. Santa Prosacco DVM'\n# 'Noemy Vandervort V'\n# 'Lexi O'Conner'\n# 'Gracie Weber'\n# 'Roscoe Johns'\n# 'Emmett Lebsack'\n# 'Keegan Thiel'\n# 'Wellington Koelpin II'\n# 'Ms. Karley Kiehn V'\n\nPytest fixtures\nFaker also has its own pytest plugin which provides a faker fixture you can use in your\ntests. Please check out the pytest fixture docs to learn more.\n\nProviders\nEach of the generator properties (like name, address, and\nlorem) are called \"fake\". A faker generator has many of them,\npackaged in \"providers\".\nfrom faker import Faker\nfrom faker.providers import internet\n\nfake = Faker()\nfake.add_provider(internet)\n\nprint(fake.ipv4_private())\nCheck the extended docs for a list of bundled providers and a list of\ncommunity providers.\n\nLocalization\nfaker.Faker can take a locale as an argument, to return localized\ndata. If no localized provider is found, the factory falls back to the\ndefault LCID string for US english, ie: en_US.\nfrom faker import Faker\nfake = Faker('it_IT')\nfor _ in range(10):\n    print(fake.name())\n\n# 'Elda Palumbo'\n# 'Pacifico Giordano'\n# 'Sig. Avide Guerra'\n# 'Yago Amato'\n# 'Eustachio Messina'\n# 'Dott. Violante Lombardo'\n# 'Sig. Alighieri Monti'\n# 'Costanzo Costa'\n# 'Nazzareno Barbieri'\n# 'Max Coppola'\nfaker.Faker also supports multiple locales. New in v3.0.0.\nfrom faker import Faker\nfake = Faker(['it_IT', 'en_US', 'ja_JP'])\nfor _ in range(10):\n    print(fake.name())\n\n# \u9234\u6728 \u967d\u4e00\n# Leslie Moreno\n# Emma Williams\n# \u6e21\u8fba \u88d5\u7f8e\u5b50\n# Marcantonio Galuppi\n# Martha Davis\n# Kristen Turner\n# \u4e2d\u6d25\u5ddd \u6625\u9999\n# Ashley Castillo\n# \u5c71\u7530 \u6843\u5b50\nYou can check available Faker locales in the source code, under the\nproviders package. The localization of Faker is an ongoing process, for\nwhich we need your help. Please don't hesitate to create a localized\nprovider for your own locale and submit a Pull Request (PR).\n\nOptimizations\nThe Faker constructor takes a performance-related argument called\nuse_weighting. It specifies whether to attempt to have the frequency\nof values match real-world frequencies (e.g. the English name Gary would\nbe much more frequent than the name Lorimer). If use_weighting is False,\nthen all items have an equal chance of being selected, and the selection\nprocess is much faster. 
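For example, a minimal sketch of trading realism for speed (the generated output will of course vary):

from faker import Faker

# every candidate value becomes equally likely, and selection is noticeably faster
fake_fast = Faker(use_weighting=False)
print(fake_fast.name())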
The default is True.\n\nCommand line usage\nWhen installed, you can invoke faker from the command-line:\nfaker [-h] [--version] [-o output]\n      [-l {bg_BG,cs_CZ,...,zh_CN,zh_TW}]\n      [-r REPEAT] [-s SEP]\n      [-i {package.containing.custom_provider otherpkg.containing.custom_provider}]\n      [fake] [fake argument [fake argument ...]]\nWhere:\n\nfaker: is the script when installed in your environment, in\ndevelopment you could use python -m faker instead\n-h, --help: shows a help message\n--version: shows the program's version number\n-o FILENAME: redirects the output to the specified filename\n-l {bg_BG,cs_CZ,...,zh_CN,zh_TW}: allows use of a localized\nprovider\n-r REPEAT: will generate a specified number of outputs\n-s SEP: will generate the specified separator after each\ngenerated output\n-i {my.custom_provider other.custom_provider} list of additional custom\nproviders to use. Note that is the import path of the package containing\nyour Provider class, not the custom Provider class itself.\nfake: is the name of the fake to generate an output for, such as\nname, address, or text\n[fake argument ...]: optional arguments to pass to the fake (e.g. the\nprofile fake takes an optional list of comma separated field names as the\nfirst argument)\n\nExamples:\n$ faker address\n968 Bahringer Garden Apt. 722\nKristinaland, NJ 09890\n\n$ faker -l de_DE address\nSamira-Niemeier-Allee 56\n94812 Biedenkopf\n\n$ faker profile ssn,birthdate\n{'ssn': '628-10-1085', 'birthdate': '2008-03-29'}\n\n$ faker -r=3 -s=\";\" name\nWillam Kertzmann;\nJosiah Maggio;\nGayla Schmitt;\n\nHow to create a Provider\nfrom faker import Faker\nfake = Faker()\n\n# first, import a similar Provider or use the default one\nfrom faker.providers import BaseProvider\n\n# create new provider class\nclass MyProvider(BaseProvider):\n    def foo(self) -> str:\n        return 'bar'\n\n# then add new provider to faker instance\nfake.add_provider(MyProvider)\n\n# now you can use:\nfake.foo()\n# 'bar'\n\nHow to create a Dynamic Provider\nDynamic providers can read elements from an external source.\nfrom faker import Faker\nfrom faker.providers import DynamicProvider\n\nmedical_professions_provider = DynamicProvider(\n     provider_name=\"medical_profession\",\n     elements=[\"dr.\", \"doctor\", \"nurse\", \"surgeon\", \"clerk\"],\n)\n\nfake = Faker()\n\n# then add new provider to faker instance\nfake.add_provider(medical_professions_provider)\n\n# now you can use:\nfake.medical_profession()\n# 'dr.'\n\nHow to customize the Lorem Provider\nYou can provide your own sets of words if you don't want to use the\ndefault lorem ipsum one. The following example shows how to do it with a list of words picked from cakeipsum :\nfrom faker import Faker\nfake = Faker()\n\nmy_word_list = [\n'danish','cheesecake','sugar',\n'Lollipop','wafer','Gummies',\n'sesame','Jelly','beans',\n'pie','bar','Ice','oat' ]\n\nfake.sentence()\n# 'Expedita at beatae voluptatibus nulla omnis.'\n\nfake.sentence(ext_word_list=my_word_list)\n# 'Oat beans oat Lollipop bar cheesecake.'\n\nHow to use with Factory Boy\nFactory Boy already ships with integration with Faker. 
Simply use the\nfactory.Faker method of factory_boy:\nimport factory\nfrom myapp.models import Book\n\nclass BookFactory(factory.Factory):\n    class Meta:\n        model = Book\n\n    title = factory.Faker('sentence', nb_words=4)\n    author_name = factory.Faker('name')\n\nAccessing the random instance\nThe .random property on the generator returns the instance of\nrandom.Random used to generate the values:\nfrom faker import Faker\nfake = Faker()\nfake.random\nfake.random.getstate()\nBy default all generators share the same instance of random.Random, which\ncan be accessed with from faker.generator import random. Using this may\nbe useful for plugins that want to affect all faker instances.\n\nUnique values\nThrough use of the .unique property on the generator, you can guarantee\nthat any generated values are unique for this specific instance.\nfrom faker import Faker\nfake = Faker()\nnames = [fake.unique.first_name() for i in range(500)]\nassert len(set(names)) == len(names)\nCalling fake.unique.clear() clears the already seen values.\nNote, to avoid infinite loops, after a number of attempts to find a unique\nvalue, Faker will throw a UniquenessException. Beware of the birthday\nparadox, collisions\nare more likely than you'd think.\nfrom faker import Faker\n\nfake = Faker()\nfor i in range(3):\n     # Raises a UniquenessException\n     fake.unique.boolean()\nIn addition, only hashable arguments and return values can be used\nwith .unique.\n\nSeeding the Generator\nWhen using Faker for unit testing, you will often want to generate the same\ndata set. For convenience, the generator also provide a seed() method,\nwhich seeds the shared random number generator. Seed produces the same result\nwhen the same methods with the same version of faker are called.\nfrom faker import Faker\nfake = Faker()\nFaker.seed(4321)\n\nprint(fake.name())\n# 'Margaret Boehm'\nEach generator can also be switched to its own instance of random.Random,\nseparate to the shared one, by using the seed_instance() method, which acts\nthe same way. For example:\nfrom faker import Faker\nfake = Faker()\nfake.seed_instance(4321)\n\nprint(fake.name())\n# 'Margaret Boehm'\nPlease note that as we keep updating datasets, results are not guaranteed to be\nconsistent across patch versions. If you hardcode results in your test, make sure\nyou pinned the version of Faker down to the patch number.\nIf you are using pytest, you can seed the faker fixture by defining a faker_seed\nfixture. Please check out the pytest fixture docs to learn more.\n\nTests\nRun tests:\n$ tox\nWrite documentation for the providers of the default locale:\n$ python -m faker > docs.txt\nWrite documentation for the providers of a specific locale:\n$ python -m faker --lang=de_DE > docs_de.txt\n\nContribute\nPlease see CONTRIBUTING.\n\nLicense\nFaker is released under the MIT License. 
See the bundled LICENSE file\nfor details.\n\nCredits\n\nFZaninotto / PHP Faker\nDistribute\nBuildout\nmodern-package-template\n\n\n\n", "description": "Generate fake data for testing and populating databases."}, {"name": "extract-msg", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nextract-msg\nNOTICE\nChangelog\nUsage\nError Reporting\nSupporting The Module\nInstallation\nVersioning\nTodo\nCredits\nExtra\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n \n \n \n\nextract-msg\nExtracts emails and attachments saved in Microsoft Outlook's .msg files\nThe python package extract_msg automates the extraction of key email\ndata (from, to, cc, date, subject, body) and the email's attachments.\nDocumentation can be found in the code, on the wiki, and on the\nRead the Docs page.\n\nNOTICE\n0.29.* is the branch that supports both Python 2 and Python 3. It is now only\nreceiving bug fixes and will not be receiving feature updates.\n0.39.* is the last versions that supported Python 3.6 and 3.7. Support for those\nwas dropped to allow the use of new features from 3.8 and because the life spans\nof those versions had ended.\nThis module has a Discord server for general discussion. You can find it here:\nDiscord\n\nChangelog\n\nChangelog\n\n\nUsage\nTo use it as a command-line script:\npython -m extract_msg example.msg\n\nThis will produce a new folder named according to the date, time and\nsubject of the message (for example \"2013-07-24_0915 Example\"). The\nemail itself can be found inside the new folder along with the\nattachments.\nThe script uses Philippe Lagadec's Python module that reads Microsoft\nOLE2 files (also called Structured Storage, Compound File Binary Format\nor Compound Document File Format). This is the underlying format of\nOutlook's .msg files. This library currently supports Python 3.8 and above.\nThe script was originally built using Peter Fiskerstrand's documentation of the\n.msg format. Redemption's discussion of the different property types used within\nExtended MAPI was also useful. For future reference, note that Microsoft have\nopened up their documentation of the file format, which is what is currently\nbeing used for development.\n#########REWRITE COMMAND LINE USAGE#############\nCurrently, the README is in the process of being redone. For now, please\nrefer to the usage information provided from the program's help dialog:\nusage: extract_msg [-h] [--use-content-id] [--json] [--file-logging] [-v] [--log LOG] [--config CONFIGPATH] [--out OUTPATH] [--use-filename] [--dump-stdout] [--html] [--pdf] [--wk-path WKPATH] [--wk-options [WKOPTIONS ...]]\n                   [--prepared-html] [--charset CHARSET] [--raw] [--rtf] [--allow-fallback] [--skip-body-not-found] [--zip ZIP] [--save-header] [--attachments-only] [--skip-hidden] [--no-folders] [--skip-embedded] [--extract-embedded]\n                   [--overwrite-existing] [--skip-not-implemented] [--out-name OUTNAME | --glob] [--ignore-rtfde] [--progress]\n                   msg [msg ...]\n\nextract_msg: Extracts emails and attachments saved in Microsoft Outlook's .msg files. https://github.com/TeamMsgExtractor/msg-extractor\n\npositional arguments:\n  msg                   An MSG file to be parsed.\n\noptions:\n  -h, --help            show this help message and exit\n  --use-content-id, --cid\n                        Save attachments by their Content ID, if they have one. Useful when working with the HTML body.\n  --json                Changes to write output files as json.\n  --file-logging        Enables file logging. 
Implies --verbose level 1.\n  -v, --verbose         Turns on console logging. Specify more than once for higher verbosity.\n  --log LOG             Set the path to write the file log to.\n  --config CONFIGPATH   Set the path to load the logging config from.\n  --out OUTPATH         Set the folder to use for the program output. (Default: Current directory)\n  --use-filename        Sets whether the name of each output is based on the msg filename.\n  --dump-stdout         Tells the program to dump the message body (plain text) to stdout. Overrides saving arguments.\n  --html                Sets whether the output should be HTML. If this is not possible, will error.\n  --pdf                 Saves the body as a PDF. If this is not possible, will error.\n  --wk-path WKPATH      Overrides the path for finding wkhtmltopdf.\n  --wk-options [WKOPTIONS ...]\n                        Sets additional options to be used in wkhtmltopdf. Should be a series of options and values, replacing the - or -- in the beginning with + or ++, respectively. For example: --wk-options \"+O Landscape\"\n  --prepared-html       When used in conjunction with --html, sets whether the HTML output should be prepared for embedded attachments.\n  --charset CHARSET     Character set to use for the prepared HTML in the added tag. (Default: utf-8)\n  --raw                 Sets whether the output should be raw. If this is not possible, will error.\n  --rtf                 Sets whether the output should be RTF. If this is not possible, will error.\n  --allow-fallback      Tells the program to fallback to a different save type if the selected one is not possible.\n  --skip-body-not-found\n                        Skips saving the body if the body cannot be found, rather than throwing an error.\n  --zip ZIP             Path to use for saving to a zip file.\n  --save-header         Store the header in a separate file.\n  --attachments-only    Specify to only save attachments from an msg file.\n  --skip-hidden         Skips any attachment marked as hidden (usually ones embedded in the body).\n  --no-folders          Stores everything in the location specified by --out. Requires --attachments-only and is incompatible with --out-name.\n  --skip-embedded       Skips all embedded MSG files when saving attachments.\n  --extract-embedded    Extracts the embedded MSG files as MSG files instead of running their save functions.\n  --overwrite-existing  Disables filename conflict resolution code for attachments when saving a file, causing files to be overwriten if two attachments with the same filename are on an MSG file.\n  --skip-not-implemented, --skip-ni\n                        Skips any attachments that are not implemented, allowing saving of the rest of the message.\n  --out-name OUTNAME    Name to be used with saving the file output. Cannot be used if you are saving more than one file.\n  --glob, --wildcard    Interpret all paths as having wildcards. Incompatible with --out-name.\n  --ignore-rtfde        Ignores all errors thrown from RTFDE when trying to save. 
Useful for allowing fallback to continue when an exception happens.\n  --progress            Shows what file the program is currently working on during it's progress.\n\nTo use this in your own script, start by using:\nimport extract_msg\n\nFrom there, open the MSG file:\nmsg = extract_msg.openMsg(\"path/to/msg/file.msg\")\n\nAlternatively, if you wish to send a msg binary string instead of a file\nto the extract_msg.openMsg Method:\nmsg_raw = b'\\xd0\\xcf\\x11\\xe0\\xa1\\xb1\\x1a\\xe1\\x00 ... \\x00\\x00\\x00'\nmsg = extract_msg.openMsg(msg_raw)\n\nIf you want to override the default attachment class and use one of your\nown, simply change the code to:\nmsg = extract_msg.openMsg(\"path/to/msg/file.msg\", attachmentClass = CustomAttachmentClass)\n\nwhere CustomAttachmentClass is your custom class.\n#TODO: Finish this section\nIf you have any questions feel free to contact Destiny at arceusthe [at]\ngmail [dot] com. She is the co-owner and main developer of the project.\nIf you have issues, it would be best to get help for them by opening a\nnew github issue.\n\nError Reporting\nShould you encounter an error that has not already been reported, please\ndo the following when reporting it: * Make sure you are using the\nlatest version of extract_msg (check the version on PyPi). * State your\nPython version. * Include the code, if any, that you used. * Include a\ncopy of the traceback.\n\nSupporting The Module\nIf you'd like to donate to help support the development of the module, you can\ndonate to Destiny using one of the following services:\n\nBuy Me a Coffee\nKo-fi\nPatreon\n\n\nInstallation\nYou can install using pip:\n\nPypi\n\npip install extract-msg\n\nGithub\n\npip install git+https://github.com/TeamMsgExtractor/msg-extractor\nor you can include this in your list of python dependencies with:\n# setup.py\n\nsetup(\n    ...\n    dependency_links=['https://github.com/TeamMsgExtractor/msg-extractor/zipball/master'],\n)\nAdditionally, this module has the following extras which can be optionally\ninstalled:\n\nall: Installs all of the extras.\nmime: Installs dependency used for mimetype generation when a mimetype is not specified.\n\n\nVersioning\nThis module uses Semantic Versioning, however it has not always done so. All versions greater than or equal to 0.40.* conform successfully. As the package is currently in major version zero (0.*.*), anything MAY change at any time, as per point 4 of the SemVer specification. However, I, Destiny, am aware of the module's usage in other packages and code, and so I have taken efforts to make the versioning more reliable.\nAny change to the minor version MUST be considered a potentially breaking change, and the changelog should be checked before assuming the API will function in the way it did in the previous minor version. I do, however, try to keep the API relatively stable between minor versions, so most typical usage is likely to remain entirely unaffected.\nAny change to a patch version before the 1.0.0 release SHOULD either add functionality or have no visible difference in usage, aside from changes to the typing infomation or from a bug fix correcting the data that a component created.\nIn addition to the above conditions, it must be noted that any class, variable, function, etc., that is preceded by one or more underscores, excluding items preceded by two underscores and also proceeded by two underscores, MUST NOT be considered part of the public api. These methods may change at any time, in any way.\nI am aware of the F.A.Q. 
question that suggests that I should probably have pushed the module to a 1.0.0 release due to its usage in production, however there are a number of different items on the TODO list that I feel should be completed before that time. While some are simply important features I believe should exist, others are overhauls to sections of the public API that have needed careful fixing for quite a while, fixes that have slowly been happening throughout the versions. An important change was made in the 0.45.0 release which deprecates a large number of commonly used private functions and created more stable versions of them in the public API.\nAdditionally, my focus on versioning info has revealed that some of the dependencies are still in major version 0 or do not necessarily conform to Semantic Versioning. As such, these packages are more tightly constrained on what versions are considered acceptable, and careful consideration should be taken before extending the accepted range of versions.\nDetails on Semantic Versioning can be found at semver.org.\n\nTodo\nHere is a list of things that are currently on our todo list:\n\nTests (ie. unittest)\nFinish writing a usage guide\nImprove the intelligence of the saving functions\nImprove README\nCreate a wiki for advanced usage information\n\n\nCredits\nDestiny Peterson (The Elemental of Destruction) - Co-owner, principle programmer, knows more about msg files than anyone probably should.\nMatthew Walker - Original developer and co-owner.\nJP Bourget - Senior programmer, readability and organization expert, secondary manager.\nPhilippe Lagadec - Python OleFile module developer.\nJoel Kaufman - First implementations of the json and filename flags.\nDean Malmgren - First implementation of the setup.py script.\nSeamus Tuohy - Developer of the Python RTFDE module. Gave first examples of how to use the module and has worked with Destiny to ensure functionality.\nLiam - Significant reorganization and transfer of data.\nAnd thank you to everyone who has opened an issue and helped us track down those pesky bugs.\n\nExtra\nCheck out the new project msg-explorer that allows you to open MSG files and\nexplore their contents in a GUI. It is usually updated within a few days of a\nmajor release to ensure continued support. 
Because of this, it is recommended to\ninstall it to a separate environment (like a vitural env) to not interfere with\nyour access to the newest major version of extract-msg.\n\n\n"}, {"name": "executing", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nexecuting\nUsage\nGetting the AST node\nGetting the source code of the node\nGetting the __qualname__ of the current function\nThe Source class\nInstallation\nHow does it work?\nIs it reliable?\nWhich nodes can it identify?\nProjects that use this\nMy Projects\nProjects I've contributed to\n\n\n\n\n\nREADME.md\n\n\n\n\nexecuting\n  \nThis mini-package lets you get information about what a frame is currently doing, particularly the AST node being executed.\n\nUsage\n\nGetting the AST node\nGetting the source code of the node\nGetting the __qualname__ of the current function\nThe Source class\n\n\nInstallation\nHow does it work?\nIs it reliable?\nWhich nodes can it identify?\nLibraries that use this\n\nUsage\nGetting the AST node\nimport executing\n\nnode = executing.Source.executing(frame).node\nThen node will be an AST node (from the ast standard library module) or None if the node couldn't be identified (which may happen often and should always be checked).\nnode will always be the same instance for multiple calls with frames at the same point of execution.\nIf you have a traceback object, pass it directly to Source.executing() rather than the tb_frame attribute to get the correct node.\nGetting the source code of the node\nFor this you will need to separately install the asttokens library, then obtain an ASTTokens object:\nexecuting.Source.executing(frame).source.asttokens()\nor:\nexecuting.Source.for_frame(frame).asttokens()\nor use one of the convenience methods:\nexecuting.Source.executing(frame).text()\nexecuting.Source.executing(frame).text_range()\nGetting the __qualname__ of the current function\nexecuting.Source.executing(frame).code_qualname()\nor:\nexecuting.Source.for_frame(frame).code_qualname(frame.f_code)\nThe Source class\nEverything goes through the Source class. Only one instance of the class is created for each filename. Subclassing it to add more attributes on creation or methods is recommended. The classmethods such as executing will respect this. See the source code and docstrings for more detail.\nInstallation\npip install executing\n\nIf you don't like that you can just copy the file executing.py, there are no dependencies (but of course you won't get updates).\nHow does it work?\nSuppose the frame is executing this line:\nself.foo(bar.x)\nand in particular it's currently obtaining the attribute self.foo. Looking at the bytecode, specifically frame.f_code.co_code[frame.f_lasti], we can tell that it's loading an attribute, but it's not obvious which one. We can narrow down the statement being executed using frame.f_lineno and find the two ast.Attribute nodes representing self.foo and bar.x. How do we find out which one it is, without recreating the entire compiler in Python?\nThe trick is to modify the AST slightly for each candidate expression and observe the changes in the bytecode instructions. We change the AST to this:\n(self.foo ** 'longuniqueconstant')(bar.x)\nand compile it, and the bytecode will be almost the same but there will be two new instructions:\nLOAD_CONST 'longuniqueconstant'\nBINARY_POWER\n\nand just before that will be a LOAD_ATTR instruction corresponding to self.foo. 
Seeing that it's in the same position as the original instruction lets us know we've found our match.\nIs it reliable?\nYes - if it identifies a node, you can trust that it's identified the correct one. The tests are very thorough - in addition to unit tests which check various situations directly, there are property tests against a large number of files (see the filenames printed in this build) with real code. Specifically, for each file, the tests:\n\nIdentify as many nodes as possible from all the bytecode instructions in the file, and assert that they are all distinct\nFind all the nodes that should be identifiable, and assert that they were indeed identified somewhere\n\nIn other words, it shows that there is a one-to-one mapping between the nodes and the instructions that can be handled. This leaves very little room for a bug to creep in.\nFurthermore, executing checks that the instructions compiled from the modified AST exactly match the original code save for a few small known exceptions. This accounts for all the quirks and optimisations in the interpreter.\nWhich nodes can it identify?\nCurrently it works in almost all cases for the following ast nodes:\n\nCall, e.g. self.foo(bar)\nAttribute, e.g. point.x\nSubscript, e.g. lst[1]\nBinOp, e.g. x + y (doesn't include and and or)\nUnaryOp, e.g. -n (includes not but only works sometimes)\nCompare e.g. a < b (not for chains such as 0 < p < 1)\n\nThe plan is to extend to more operations in the future.\nProjects that use this\nMy Projects\n\nstack_data: Extracts data from stack frames and tracebacks, particularly to display more useful tracebacks than the default. Also uses another related library of mine: pure_eval.\nfuturecoder: Highlights the executing node in tracebacks using executing via stack_data, and provides debugging with snoop.\nsnoop: A feature-rich and convenient debugging library. Uses executing to show the operation which caused an exception and to allow the pp function to display the source of its arguments.\nheartrate: A simple real time visualisation of the execution of a Python program. Uses executing to highlight currently executing operations, particularly in each frame of the stack trace.\nsorcery: Dark magic delights in Python. Uses executing to let special callables called spells know where they're being called from.\n\nProjects I've contributed to\n\nIPython: Highlights the executing node in tracebacks using executing via stack_data.\nicecream: \ud83c\udf66 Sweet and creamy print debugging. Uses executing to identify where ic is called and print its arguments.\nfriendly_traceback: Uses stack_data and executing to pinpoint the cause of errors and provide helpful explanations.\npython-devtools: Uses executing for print debugging similar to icecream.\nsentry_sdk: Add the integration sentry_sdk.integrations.executingExecutingIntegration() to show the function __qualname__ in each frame in sentry events.\nvarname: Dark magics about variable names in python. Uses executing to find where its various magical functions like varname and nameof are called from.\n\n\n\n", "description": "Inspect Python code AST to get info about the current statement."}, {"name": "exchange-calendars", "readme": "\nexchange_calendars\n   \nA Python library for defining and querying calendars for security exchanges.\nCalendars for more than 50 exchanges available out-the-box! 
If you still can't find the calendar you're looking for, create a new one!\nNotice: market_prices - the new library for prices data!\nMuch of the recent development of exchange_calendars has been driven by the new market_prices library. Check it out if you like the idea of using exchange_calendars to create meaningful OHLCV datasets. Works out-the-box with freely available data!\nNotice: v4 released (June 2022)\nThe earliest stable version of v4 is 4.0.1 (not 4.0).\nWhat's changed?\nVersion 4.0.1 completes the transition to a more consistent interface across the package. The most significant changes are:\n\nSessions are now timezone-naive (previously UTC).\nSchedule columns now have timezone set as UTC (whilst the times have always been defined in terms of UTC, previously the dtype was timezone-naive).\nThe following schedule columns were renamed:\n\n'market_open' renamed as 'open'.\n'market_close' renamed as 'close'.\n\n\nDefault calendar 'side' for all calendars is now \"left\" (previously \"right\" for 24-hour calendars and \"both\" for all others). This changes the minutes that are considered trading minutes by default (see minutes tutorial for an explanation of trading minutes).\nThe 'count' parameter of sessions_window and minutes_window methods now reflects the window length (previously window length + 1).\nNew is_open_at_time calendar method to evaluate if an exchange is open as at a specific instance (as opposed to over an evaluated minute).\nThe minimum Python version supported is now 3.8 (previously 3.7).\nParameters have been renamed for some methods (list here)\nThe following methods have been deprecated:\n\nsessions_opens (use .opens[start:end])\nsessions_closes (use .closes[start:end])\n\n\nMethods deprecated in 3.4 have been removed (lists here and here)\n\nSee the 4.0 release todo for a full list of changes and corresponding PRs.\nPlease offer any feedback at the v4 discussion.\nInstallation\n$ pip install exchange_calendars\n\nQuick Start\nimport exchange_calendars as xcals\n\nGet a list of available calendars:\n>>> xcals.get_calendar_names(include_aliases=False)[5:10]\n['CMES', 'IEPA', 'XAMS', 'XASX', 'XBKK']\n\nGet a calendar:\n>>> xnys = xcals.get_calendar(\"XNYS\")  # New York Stock Exchange\n>>> xhkg = xcals.get_calendar(\"XHKG\")  # Hong Kong Stock Exchange\n\nQuery the schedule:\n>>> xhkg.schedule.loc[\"2021-12-29\":\"2022-01-04\"]\n\n\n\n\n\n\n\n\n\n   open break_start break_end close     2021-12-29 2021-12-29 01:30:00+00:00 2021-12-29 04:00:00+00:00 2021-12-29 05:00:00+00:00 2021-12-29 08:00:00+00:00   2021-12-30 2021-12-30 01:30:00+00:00 2021-12-30 04:00:00+00:00 2021-12-30 05:00:00+00:00 2021-12-30 08:00:00+00:00   2021-12-31 2021-12-31 01:30:00+00:00 NaT NaT 2021-12-31 04:00:00+00:00   2022-01-03 2022-01-03 01:30:00+00:00 2022-01-03 04:00:00+00:00 2022-01-03 05:00:00+00:00 2022-01-03 08:00:00+00:00   2022-01-04 2022-01-04 01:30:00+00:00 2022-01-04 04:00:00+00:00 2022-01-04 05:00:00+00:00 2022-01-04 08:00:00+00:00  \n\nWorking with sessions\n>>> xnys.is_session(\"2022-01-01\")\nFalse\n\n>>> xnys.sessions_in_range(\"2022-01-01\", \"2022-01-11\")\nDatetimeIndex(['2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06',\n               '2022-01-07', '2022-01-10', '2022-01-11'],\n              dtype='datetime64[ns]', freq='C')\n\n>>> xnys.sessions_window(\"2022-01-03\", 7)\nDatetimeIndex(['2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06',\n               '2022-01-07', '2022-01-10', '2022-01-11'],\n              dtype='datetime64[ns]', freq='C')\n\n>>> 
xnys.date_to_session(\"2022-01-01\", direction=\"next\")\nTimestamp('2022-01-03 00:00:00', freq='C')\n\n>>> xnys.previous_session(\"2022-01-11\")\nTimestamp('2022-01-10 00:00:00', freq='C')\n\n>>> xhkg.trading_index(\n...     \"2021-12-30\", \"2021-12-31\", period=\"90T\", force=True\n... )\nIntervalIndex([[2021-12-30 01:30:00, 2021-12-30 03:00:00), [2021-12-30 03:00:00, 2021-12-30 04:00:00), [2021-12-30 05:00:00, 2021-12-30 06:30:00), [2021-12-30 06:30:00, 2021-12-30 08:00:00), [2021-12-31 01:30:00, 2021-12-31 03:00:00), [2021-12-31 03:00:00, 2021-12-31 04:00:00)], dtype='interval[datetime64[ns, UTC], left]')\n\nSee the sessions tutorial for a deeper dive into sessions.\nWorking with minutes\n>>> xhkg.session_minutes(\"2022-01-03\")\nDatetimeIndex(['2022-01-03 01:30:00+00:00', '2022-01-03 01:31:00+00:00',\n               '2022-01-03 01:32:00+00:00', '2022-01-03 01:33:00+00:00',\n               '2022-01-03 01:34:00+00:00', '2022-01-03 01:35:00+00:00',\n               '2022-01-03 01:36:00+00:00', '2022-01-03 01:37:00+00:00',\n               '2022-01-03 01:38:00+00:00', '2022-01-03 01:39:00+00:00',\n               ...\n               '2022-01-03 07:50:00+00:00', '2022-01-03 07:51:00+00:00',\n               '2022-01-03 07:52:00+00:00', '2022-01-03 07:53:00+00:00',\n               '2022-01-03 07:54:00+00:00', '2022-01-03 07:55:00+00:00',\n               '2022-01-03 07:56:00+00:00', '2022-01-03 07:57:00+00:00',\n               '2022-01-03 07:58:00+00:00', '2022-01-03 07:59:00+00:00'],\n              dtype='datetime64[ns, UTC]', length=330, freq=None)\n\n>>> mins = [ \"2022-01-03 \" + tm for tm in [\"01:29\", \"01:30\", \"04:20\", \"07:59\", \"08:00\"] ]\n>>> [ xhkg.is_trading_minute(minute) for minute in mins ]\n[False, True, False, True, False]  # by default minutes are closed on the left side\n\n>>> xhkg.is_break_minute(\"2022-01-03 04:20\")\nTrue\n\n>>> xhkg.previous_close(\"2022-01-03 08:10\")\nTimestamp('2022-01-03 08:00:00+0000', tz='UTC')\n\n>>> xhkg.previous_minute(\"2022-01-03 08:10\")\nTimestamp('2022-01-03 07:59:00+0000', tz='UTC')\n\nCheck out the minutes tutorial for a deeper dive that includes an explanation of the concept of 'minutes' and how the \"side\" option determines which minutes are treated as trading minutes.\nTutorials\n\nsessions.ipynb - all things sessions.\nminutes.ipynb - all things minutes. Don't miss this one!\ncalendar_properties.ipynb - calendar constrution and a walk through the schedule and all other calendar properties.\ncalendar_methods.ipynb - a walk through all the methods available to interrogate a calendar.\ntrading_index.ipynb - a method that warrants a tutorial all of its own.\n\nHopefully you'll find that exchange_calendars has the method you need to get the information you want. 
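Putting a few of the methods shown above together, a minimal sketch of a typical lookup (dates chosen arbitrarily):

import exchange_calendars as xcals

xhkg = xcals.get_calendar("XHKG")

# check a date is a session before asking for its minutes
if xhkg.is_session("2022-01-03"):
    print(len(xhkg.session_minutes("2022-01-03")))  # 330, as in the minutes example above

# roll a non-session date forward to the next session
print(xhkg.date_to_session("2022-01-01", direction="next"))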
If it doesn't, either PR it or raise an issue and let us know!\nCommand Line Usage\nPrint a unix-cal like calendar straight from the command line (holidays are indicated by brackets)...\necal XNYS 2020\n\n                                        2020\n        January                        February                        March\nSu  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa\n            [ 1]  2   3 [ 4]                           [ 1]\n[ 5]  6   7   8   9  10 [11]   [ 2]  3   4   5   6   7 [ 8]   [ 1]  2   3   4   5   6 [ 7]\n[12] 13  14  15  16  17 [18]   [ 9] 10  11  12  13  14 [15]   [ 8]  9  10  11  12  13 [14]\n[19][20] 21  22  23  24 [25]   [16][17] 18  19  20  21 [22]   [15] 16  17  18  19  20 [21]\n[26] 27  28  29  30  31        [23] 24  25  26  27  28 [29]   [22] 23  24  25  26  27 [28]\n                                                              [29] 30  31\n\n        April                           May                            June\nSu  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa\n              1   2   3 [ 4]                         1 [ 2]         1   2   3   4   5 [ 6]\n[ 5]  6   7   8   9 [10][11]   [ 3]  4   5   6   7   8 [ 9]   [ 7]  8   9  10  11  12 [13]\n[12] 13  14  15  16  17 [18]   [10] 11  12  13  14  15 [16]   [14] 15  16  17  18  19 [20]\n[19] 20  21  22  23  24 [25]   [17] 18  19  20  21  22 [23]   [21] 22  23  24  25  26 [27]\n[26] 27  28  29  30            [24][25] 26  27  28  29 [30]   [28] 29  30\n                               [31]\n\n            July                          August                       September\nSu  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa\n              1   2 [ 3][ 4]                           [ 1]             1   2   3   4 [ 5]\n[ 5]  6   7   8   9  10 [11]   [ 2]  3   4   5   6   7 [ 8]   [ 6][ 7]  8   9  10  11 [12]\n[12] 13  14  15  16  17 [18]   [ 9] 10  11  12  13  14 [15]   [13] 14  15  16  17  18 [19]\n[19] 20  21  22  23  24 [25]   [16] 17  18  19  20  21 [22]   [20] 21  22  23  24  25 [26]\n[26] 27  28  29  30  31        [23] 24  25  26  27  28 [29]   [27] 28  29  30\n                               [30] 31\n\n        October                        November                       December\nSu  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa     Su  Mo  Tu  We  Th  Fr  Sa\n                  1   2 [ 3]                                            1   2   3   4 [ 5]\n[ 4]  5   6   7   8   9 [10]   [ 1]  2   3   4   5   6 [ 7]   [ 6]  7   8   9  10  11 [12]\n[11] 12  13  14  15  16 [17]   [ 8]  9  10  11  12  13 [14]   [13] 14  15  16  17  18 [19]\n[18] 19  20  21  22  23 [24]   [15] 16  17  18  19  20 [21]   [20] 21  22  23  24 [25][26]\n[25] 26  27  28  29  30 [31]   [22] 23  24  25 [26] 27 [28]   [27] 28  29  30  31\n                               [29] 30\n\necal XNYS 1 2020\n\n        January 2020\nSu  Mo  Tu  We  Th  Fr  Sa\n            [ 1]  2   3 [ 4]\n[ 5]  6   7   8   9  10 [11]\n[12] 13  14  15  16  17 [18]\n[19][20] 21  22  23  24 [25]\n[26] 27  28  29  30  31\n\nFrequently Asked Questions\nHow can I create a new calendar?\nFirst off, make sure the calendar you're after hasn't already been defined; exchange calendars comes with over 50 pre-defined calendars, including major security exchanges.\nIf you can't find what you're after, a custom calendar can be created as a subclass of ExchangeCalendar. This workflow describes the process to add a new calendar to exchange_calendars. 
Just follow the relevant parts.\nTo access the new calendar via get_calendar call either xcals.register_calendar or xcals.register_calendar_type to register, respectively, a specific calendar instance or a calendar factory (i.e. the subclass).\nCan I contribute a new calendar to exchange calendars?\nYes please! The workflow can be found here.\n<calendar> is missing a holiday, has a wrong time, should have a break etc...\nAll of the exchange calendars are maintained by user contributions. If a calendar you care about needs revising, please open a PR - that's how this thing works! (Never contributed to a project before and it all seems a bit daunting? Check this out and don't look back!)\nYou'll find the workflow to modify an existing calendar here.\nWhat times are considered open and closed?\nexchange_calendars attempts to be broadly useful by considering an exchange to be open only during periods of regular trading. During any pre-trading, post-trading or auction period the exchange is treated as closed. An exchange is also treated as closed during any observed lunch break.\nSee the minutes tutorial for a detailed explanation of which minutes an exchange is considered open over. If you previously used trading_calendars, or exchange_calendars prior to release 3.4, then this is the place to look for answers to questions of how the definition of trading minutes has changed over time (and is now stable and flexible!).\nCalendars\n\n\n\nExchange\nISO Code\nCountry\nVersion Added\nExchange Website (English)\n\n\n\n\nNew York Stock Exchange\nXNYS\nUSA\n1.0\nhttps://www.nyse.com/index\n\n\nCBOE Futures\nXCBF\nUSA\n1.0\nhttps://markets.cboe.com/us/futures/overview/\n\n\nChicago Mercantile Exchange\nCMES\nUSA\n1.0\nhttps://www.cmegroup.com/\n\n\nICE US\nIEPA\nUSA\n1.0\nhttps://www.theice.com/index\n\n\nToronto Stock Exchange\nXTSE\nCanada\n1.0\nhttps://www.tsx.com/\n\n\nBMF Bovespa\nBVMF\nBrazil\n1.0\nhttp://www.b3.com.br/en_us/\n\n\nLondon Stock Exchange\nXLON\nEngland\n1.0\nhttps://www.londonstockexchange.com/\n\n\nEuronext Amsterdam\nXAMS\nNetherlands\n1.2\nhttps://www.euronext.com/en/regulation/amsterdam\n\n\nEuronext Brussels\nXBRU\nBelgium\n1.2\nhttps://www.euronext.com/en/regulation/brussels\n\n\nEuronext Lisbon\nXLIS\nPortugal\n1.2\nhttps://www.euronext.com/en/regulation/lisbon\n\n\nEuronext Paris\nXPAR\nFrance\n1.2\nhttps://www.euronext.com/en/regulation/paris\n\n\nFrankfurt Stock Exchange\nXFRA\nGermany\n1.2\nhttp://en.boerse-frankfurt.de/\n\n\nSIX Swiss Exchange\nXSWX\nSwitzerland\n1.2\nhttps://www.six-group.com/en/home.html\n\n\nTokyo Stock Exchange\nXTKS\nJapan\n1.2\nhttps://www.jpx.co.jp/english/\n\n\nAustrialian Securities Exchange\nXASX\nAustralia\n1.3\nhttps://www.asx.com.au/\n\n\nBolsa de Madrid\nXMAD\nSpain\n1.3\nhttps://www.bolsamadrid.es\n\n\nBorsa Italiana\nXMIL\nItaly\n1.3\nhttps://www.borsaitaliana.it\n\n\nNew Zealand Exchange\nXNZE\nNew Zealand\n1.3\nhttps://www.nzx.com/\n\n\nWiener Borse\nXWBO\nAustria\n1.3\nhttps://www.wienerborse.at/en/\n\n\nHong Kong Stock Exchange\nXHKG\nHong Kong\n1.3\nhttps://www.hkex.com.hk/?sc_lang=en\n\n\nCopenhagen Stock Exchange\nXCSE\nDenmark\n1.4\nhttp://www.nasdaqomxnordic.com/\n\n\nHelsinki Stock Exchange\nXHEL\nFinland\n1.4\nhttp://www.nasdaqomxnordic.com/\n\n\nStockholm Stock Exchange\nXSTO\nSweden\n1.4\nhttp://www.nasdaqomxnordic.com/\n\n\nOslo Stock Exchange\nXOSL\nNorway\n1.4\nhttps://www.oslobors.no/ob_eng/\n\n\nIrish Stock Exchange\nXDUB\nIreland\n1.4\nhttp://www.ise.ie/\n\n\nBombay Stock 
Exchange\nXBOM\nIndia\n1.5\nhttps://www.bseindia.com\n\n\nSingapore Exchange\nXSES\nSingapore\n1.5\nhttps://www.sgx.com\n\n\nShanghai Stock Exchange\nXSHG\nChina\n1.5\nhttp://english.sse.com.cn\n\n\nKorea Exchange\nXKRX\nSouth Korea\n1.6\nhttp://global.krx.co.kr\n\n\nIceland Stock Exchange\nXICE\nIceland\n1.7\nhttp://www.nasdaqomxnordic.com/\n\n\nPoland Stock Exchange\nXWAR\nPoland\n1.9\nhttp://www.gpw.pl\n\n\nSantiago Stock Exchange\nXSGO\nChile\n1.9\nhttps://www.bolsadesantiago.com/\n\n\nColombia Securities Exchange\nXBOG\nColombia\n1.9\nhttps://www.bvc.com.co/nueva/https://www.bvc.com.co/nueva/\n\n\nMexican Stock Exchange\nXMEX\nMexico\n1.9\nhttps://www.bmv.com.mx\n\n\nLima Stock Exchange\nXLIM\nPeru\n1.9\nhttps://www.bvl.com.pe\n\n\nPrague Stock Exchange\nXPRA\nCzech Republic\n1.9\nhttps://www.pse.cz/en/\n\n\nBudapest Stock Exchange\nXBUD\nHungary\n1.10\nhttps://bse.hu/\n\n\nAthens Stock Exchange\nASEX\nGreece\n1.10\nhttp://www.helex.gr/\n\n\nIstanbul Stock Exchange\nXIST\nTurkey\n1.10\nhttps://www.borsaistanbul.com/en/\n\n\nJohannesburg Stock Exchange\nXJSE\nSouth Africa\n1.10\nhttps://www.jse.co.za/z\n\n\nMalaysia Stock Exchange\nXKLS\nMalaysia\n1.11\nhttp://www.bursamalaysia.com/market/\n\n\nMoscow Exchange\nXMOS\nRussia\n1.11\nhttps://www.moex.com/en/\n\n\nPhilippine Stock Exchange\nXPHS\nPhilippines\n1.11\nhttps://www.pse.com.ph/\n\n\nStock Exchange of Thailand\nXBKK\nThailand\n1.11\nhttps://www.set.or.th/set/mainpage.do?language=en&country=US\n\n\nIndonesia Stock Exchange\nXIDX\nIndonesia\n1.11\nhttps://www.idx.co.id/\n\n\nTaiwan Stock Exchange Corp.\nXTAI\nTaiwan\n1.11\nhttps://www.twse.com.tw/en/\n\n\nBuenos Aires Stock Exchange\nXBUE\nArgentina\n1.11\nhttps://www.bcba.sba.com.ar/\n\n\nPakistan Stock Exchange\nXKAR\nPakistan\n1.11\nhttps://www.psx.com.pk/\n\n\nXetra\nXETR\nGermany\n2.1\nhttps://www.xetra.com/\n\n\nTel Aviv Stock Exchange\nXTAE\nIsrael\n2.1\nhttps://www.tase.co.il/\n\n\nAstana International Exchange\nAIXK\nKazakhstan\n3.2\nhttps://www.aix.kz/\n\n\nBucharest Stock Exchange\nXBSE\nRomania\n3.2\nhttps://www.bvb.ro/\n\n\nSaudi Stock Exchange\nXSAU\nSaudi Arabia\n4.2\nhttps://www.saudiexchange.sa/\n\n\n\n\nNote that exchange calendars are defined by their ISO-10383 market identifier code.\n\nDeprecations and Renaming\nMethods deprecated in 4.0\n\n\n\nDeprecated method\nReason\n\n\n\n\nsessions_closes\nuse .closes[start:end]\n\n\nsessions_opens\nuse .opens[start:end]\n\n\n\nMethods with a parameter renamed in 4.0\n\n\n\nMethod\n\n\n\n\nis_session\n\n\nis_open_on_minute\n\n\nminutes_in_range\n\n\nminutes_window\n\n\nnext_close\n\n\nnext_minute\n\n\nnext_open\n\n\nprevious_close\n\n\nprevious_minute\n\n\nprevious_open\n\n\nsession_break_end\n\n\nsession_break_start\n\n\nsession_close\n\n\nsession_open\n\n\nsessions_in_range\n\n\nsessions_window\n\n\n\nMethods renamed in version 3.4 and removed in 4.0\n\n\n\nPrevious name\nNew 
name\n\n\n\n\nall_minutes\nminutes\n\n\nall_minutes_nanos\nminutes_nanos\n\n\nall_sessions\nsessions\n\n\nbreak_start_and_end_for_session\nsession_break_start_end\n\n\ndate_to_session_label\ndate_to_session\n\n\nfirst_trading_minute\nfirst_minute\n\n\nfirst_trading_session\nfirst_session\n\n\nhas_breaks\nsessions_has_break\n\n\nlast_trading_minute\nlast_minute\n\n\nlast_trading_session\nlast_session\n\n\nnext_session_label\nnext_session\n\n\nopen_and_close_for_session\nsession_open_close\n\n\nprevious_session_label\nprevious_session\n\n\nmarket_break_ends_nanos\nbreak_ends_nanos\n\n\nmarket_break_starts_nanos\nbreak_starts_nanos\n\n\nmarket_closes_nanos\ncloses_nanos\n\n\nmarket_opens_nanos\nopens_nanos\n\n\nminute_index_to_session_labels\nminutes_to_sessions\n\n\nminute_to_session_label\nminute_to_session\n\n\nminutes_count_for_sessions_in_range\nsessions_minutes_count\n\n\nminutes_for_session\nsession_minutes\n\n\nminutes_for_sessions_in_range\nsessions_minutes\n\n\nsession_closes_in_range\nsessions_closes\n\n\nsession_distance\nsessions_distance\n\n\nsession_opens_in_range\nsessions_opens\n\n\n\nOther methods deprecated in 3.4 and removed in 4.0\n\n\n\nRemoved Method\n\n\n\n\nexecution_minute_for_session\n\n\nexecution_minute_for_sessions_in_range\n\n\nexecution_time_from_close\n\n\nexecution_time_from_open\n\n\n\n"}, {"name": "exceptiongroup", "readme": "\n\n\nThis is a backport of the BaseExceptionGroup and ExceptionGroup classes from\nPython 3.11.\nIt contains the following:\n\nThe  exceptiongroup.BaseExceptionGroup and exceptiongroup.ExceptionGroup\nclasses\nA utility function (exceptiongroup.catch()) for catching exceptions possibly\nnested in an exception group\nPatches to the TracebackException class that properly formats exception groups\n(installed on import)\nAn exception hook that handles formatting of exception groups through\nTracebackException (installed on import)\nSpecial versions of some of the functions from the traceback module, modified to\ncorrectly handle exception groups even when monkey patching is disabled, or blocked by\nanother custom exception hook:\n\ntraceback.format_exception()\ntraceback.format_exception_only()\ntraceback.print_exception()\ntraceback.print_exc()\n\n\n\nIf this package is imported on Python 3.11 or later, the built-in implementations of the\nexception group classes are used instead, TracebackException is not monkey patched\nand the exception hook won\u2019t be installed.\nSee the standard library documentation for more information on exception groups.\n\nCatching exceptions\nDue to the lack of the except* syntax introduced by PEP 654 in earlier Python\nversions, you need to use exceptiongroup.catch() to catch exceptions that are\npotentially nested inside an exception group. This function returns a context manager\nthat calls the given handler for any exceptions matching the sole argument.\nThe argument to catch() must be a dict (or any Mapping) where each key is either\nan exception class or an iterable of exception classes. Each value must be a callable\nthat takes a single positional argument. The handler will be called at most once, with\nan exception group as an argument which will contain all the exceptions that are any\nof the given types, or their subclasses. 
The exception group may contain nested groups\ncontaining more matching exceptions.\nThus, the following Python 3.11+ code:\ntry:\n    ...\nexcept* (ValueError, KeyError) as excgroup:\n    for exc in excgroup.exceptions:\n        print('Caught exception:', type(exc))\nexcept* RuntimeError:\n    print('Caught runtime error')\nwould be written with this backport like this:\nfrom exceptiongroup import ExceptionGroup, catch\n\ndef value_key_err_handler(excgroup: ExceptionGroup) -> None:\n    for exc in excgroup.exceptions:\n        print('Caught exception:', type(exc))\n\ndef runtime_err_handler(exc: ExceptionGroup) -> None:\n    print('Caught runtime error')\n\nwith catch({\n    (ValueError, KeyError): value_key_err_handler,\n    RuntimeError: runtime_err_handler\n}):\n    ...\nNOTE: Just like with except*, you cannot handle BaseExceptionGroup or\nExceptionGroup with catch().\n\n\nNotes on monkey patching\nTo make exception groups render properly when an unhandled exception group is being\nprinted out, this package does two things when it is imported on any Python version\nearlier than 3.11:\n\nThe  traceback.TracebackException class is monkey patched to store extra\ninformation about exception groups (in __init__()) and properly format them (in\nformat())\nAn exception hook is installed at sys.excepthook, provided that no other hook is\nalready present. This hook causes the exception to be formatted using\ntraceback.TracebackException rather than the built-in rendered.\n\nIf sys.exceptionhook is found to be set to something else than the default when\nexceptiongroup is imported, no monkeypatching is done at all.\nTo prevent the exception hook and patches from being installed, set the environment\nvariable EXCEPTIONGROUP_NO_PATCH to 1.\n\nFormatting exception groups\nNormally, the monkey patching applied by this library on import will cause exception\ngroups to be printed properly in tracebacks. But in cases when the monkey patching is\nblocked by a third party exception hook, or monkey patching is explicitly disabled,\nyou can still manually format exceptions using the special versions of the traceback\nfunctions, like format_exception(), listed at the top of this page. They work just\nlike their counterparts in the traceback module, except that they use a separately\npatched subclass of TracebackException to perform the rendering.\nParticularly in cases where a library installs its own exception hook, it is recommended\nto use these special versions to do the actual formatting of exceptions/tracebacks.\n\n\n", "description": "Backport of Python 3.11 exception groups."}, {"name": "et-xmlfile", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. 
See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "entrypoints", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nThis package is in maintenance-only mode. New code should use the\nimportlib.metadata module\nin the Python standard library to find and load entry points.\nEntry points are a way for Python packages to advertise objects with some\ncommon interface. The most common examples are console_scripts entry points,\nwhich define shell commands by identifying a Python function to run.\nGroups of entry points, such as console_scripts, point to objects with\nsimilar interfaces. An application might use a group to find its plugins, or\nmultiple groups if it has different kinds of plugins.\nThe entrypoints module contains functions to find and load entry points.\nYou can install it from PyPI with pip install entrypoints.\nTo advertise entry points when distributing a package, see\nentry_points in the Python Packaging User Guide.\nThe pkg_resources module distributed with setuptools provides a way to\ndiscover entrypoints as well, but it contains other functionality unrelated to\nentrypoint discovery, and it does a lot of work at import time.  Merely\nimporting pkg_resources causes it to scan the files of all installed\npackages. Thus, in environments where a large number of packages are installed,\nimporting pkg_resources can be very slow (several seconds).\nBy contrast, entrypoints is focused solely on entrypoint discovery and it\nis faster. Importing entrypoints does not scan anything, and getting a\ngiven entrypoint group performs a more focused scan.\nWhen there are multiple versions of the same distribution in different\ndirectories on sys.path, entrypoints follows the rule that the first\none wins.  In most cases, this follows the logic of imports.  Similarly,\nEntrypoints relies on pip to ensure that only one .dist-info or\n.egg-info directory exists for each installed package.  
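A short usage sketch of the discovery functions described above (the group and entry point names are placeholders; any installed group works the same way):

import entrypoints

# list every entry point registered under a group
for ep in entrypoints.get_group_all('console_scripts'):
    print(ep.name, ep.module_name, ep.object_name)

# fetch one named entry point and import the object it points to
script = entrypoints.get_single('console_scripts', 'pip')
main = script.load()
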
There is no reliable\nway to pick which of several .dist-info folders accurately relates to the\nimportable modules.\n\n\n", "description": "Discover and load entry points from installed packages."}, {"name": "email-validator", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nemail-validator: Validate Email Addresses\nInstallation\nQuick Start\nUsage\nOverview\nOptions\nDNS timeout and cache\nTest addresses\nInternationalized email addresses\nInternationalized domain names (IDN)\nInternationalized local parts\nIf you know ahead of time that SMTPUTF8 is not supported by your mail submission stack\nNormalization\nUnicode Normalization\nOther Normalization\nExamples\nReturn value\nAssumptions\nTesting\nFor Project Maintainers\n\n\n\n\n\nREADME.md\n\n\n\n\nemail-validator: Validate Email Addresses\nA robust email address syntax and deliverability validation library for\nPython 3.7+ by Joshua Tauberer.\nThis library validates that a string is of the form name@example.com\nand optionally checks that the domain name is set up to receive email.\nThis is the sort of validation you would want when you are identifying\nusers by their email address like on a registration/login form (but not\nnecessarily for composing an email message, see below).\nKey features:\n\nChecks that an email address has the correct syntax --- good for\nregistration/login forms or other uses related to identifying users.\nGives friendly English error messages when validation fails that you\ncan display to end-users.\nChecks deliverability (optional): Does the domain name resolve?\n(You can override the default DNS resolver to add query caching.)\nSupports internationalized domain names and internationalized local parts.\nRejects addresses with unsafe Unicode characters, obsolete email address\nsyntax that you'd find unexpected, special use domain names like\n@localhost, and domains without a dot by default. This is an\nopinionated library!\nNormalizes email addresses (important for internationalized\nand quoted-string addresses! see below).\nPython type annotations are used.\n\nThis is an opinionated library. You should definitely also consider using\nthe less-opinionated pyIsEmail and\nflanker if they are better for your\nuse case.\n\nView the CHANGELOG / Release Notes for the version history of changes in the library. Occasionally this README is ahead of the latest published package --- see the CHANGELOG for details.\n\nInstallation\nThis package is on PyPI, so:\npip install email-validator\n(You might need to use pip3 depending on your local environment.)\nQuick Start\nIf you're validating a user's email address before creating a user\naccount in your application, you might do this:\nfrom email_validator import validate_email, EmailNotValidError\n\nemail = \"my+address@example.org\"\n\ntry:\n\n  # Check that the email address is valid. Turn on check_deliverability\n  # for first-time validations like on account creation pages (but not\n  # login pages).\n  emailinfo = validate_email(email, check_deliverability=False)\n\n  # After this point, use only the normalized form of the email address,\n  # especially before going to a database query.\n  email = emailinfo.normalized\n\nexcept EmailNotValidError as e:\n\n  # The exception message is human-readable explanation of why it's\n  # not a valid (or deliverable) email address.\n  print(str(e))\nThis validates the address and gives you its normalized form. You should\nput the normalized form in your database and always normalize before\nchecking if an address is in your database. 
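For example, at login time you might re-run validation purely to obtain the normalized form before the database lookup (find_user_by_email below is a hypothetical helper, not part of this library):

emailinfo = validate_email(email, check_deliverability=False)
user = find_user_by_email(emailinfo.normalized)
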
When using this in a login form,\nset check_deliverability to False to avoid unnecessary DNS queries.\nUsage\nOverview\nThe module provides a function validate_email(email_address) which\ntakes an email address and:\n\nRaises a EmailNotValidError with a helpful, human-readable error\nmessage explaining why the email address is not valid, or\nReturns an object with a normalized form of the email address (which\nyou should use!) and other information about it.\n\nWhen an email address is not valid, validate_email raises either an\nEmailSyntaxError if the form of the address is invalid or an\nEmailUndeliverableError if the domain name fails DNS checks. Both\nexception classes are subclasses of EmailNotValidError, which in turn\nis a subclass of ValueError.\nBut when an email address is valid, an object is returned containing\na normalized form of the email address (which you should use!) and\nother information.\nThe validator doesn't, by default, permit obsoleted forms of email addresses\nthat no one uses anymore even though they are still valid and deliverable, since\nthey will probably give you grief if you're using email for login. (See\nlater in the document about how to allow some obsolete forms.)\nThe validator optionally checks that the domain name in the email address has\na DNS MX record indicating that it can receive email. (Except a Null MX record.\nIf there is no MX record, a fallback A/AAAA-record is permitted, unless\na reject-all SPF record is present.) DNS is slow and sometimes unavailable or\nunreliable, so consider whether these checks are useful for your use case and\nturn them off if they aren't.\nThere is nothing to be gained by trying to actually contact an SMTP server, so\nthat's not done here. For privacy, security, and practicality reasons, servers\nare good at not giving away whether an address is\ndeliverable or not: email addresses that appear to accept mail at first\ncan bounce mail after a delay, and bounced mail may indicate a temporary\nfailure of a good email address (sometimes an intentional failure, like\ngreylisting).\nOptions\nThe validate_email function also accepts the following keyword arguments\n(defaults are as shown below):\ncheck_deliverability=True: If true, DNS queries are made to check that the domain name in the email address (the part after the @-sign) can receive mail, as described above. Set to False to skip this DNS-based check. It is recommended to pass False when performing validation for login pages (but not account creation pages) since re-validation of a previously validated domain in your database by querying DNS at every login is probably undesirable. You can also set email_validator.CHECK_DELIVERABILITY to False to turn this off for all calls by default.\ndns_resolver=None: Pass an instance of dns.resolver.Resolver to control the DNS resolver including setting a timeout and a cache. The caching_resolver function shown below is a helper function to construct a dns.resolver.Resolver with a LRUCache. Reuse the same resolver instance across calls to validate_email to make use of the cache.\ntest_environment=False: If True, DNS-based deliverability checks are disabled and  test and **.test domain names are permitted (see below). You can also set email_validator.TEST_ENVIRONMENT to True to turn it on for all calls by default.\nallow_smtputf8=True: Set to False to prohibit internationalized addresses that would\nrequire the\nSMTPUTF8 extension. 
You can also set email_validator.ALLOW_SMTPUTF8 to False to turn it off for all calls by default.\nallow_quoted_local=False: Set to True to allow obscure and potentially problematic email addresses in which the part of the address before the @-sign contains spaces, @-signs, or other surprising characters when the local part is surrounded in quotes (so-called quoted-string local parts). In the object returned by validate_email, the normalized local part removes any unnecessary backslash-escaping and even removes the surrounding quotes if the address would be valid without them. You can also set email_validator.ALLOW_QUOTED_LOCAL to True to turn this on for all calls by default.\nallow_domain_literal=False: Set to True to allow bracketed IPv4 and \"IPv6:\"-prefixd IPv6 addresses in the domain part of the email address. No deliverability checks are performed for these addresses. In the object returned by validate_email, the normalized domain will use the condensed IPv6 format, if applicable. The object's domain_address attribute will hold the parsed ipaddress.IPv4Address or ipaddress.IPv6Address object if applicable. You can also set email_validator.ALLOW_DOMAIN_LITERAL to True to turn this on for all calls by default.\nallow_empty_local=False: Set to True to allow an empty local part (i.e.\n@example.com), e.g. for validating Postfix aliases.\nDNS timeout and cache\nWhen validating many email addresses or to control the timeout (the default is 15 seconds), create a caching dns.resolver.Resolver to reuse in each call. The caching_resolver function returns one easily for you:\nfrom email_validator import validate_email, caching_resolver\n\nresolver = caching_resolver(timeout=10)\n\nwhile True:\n  validate_email(email, dns_resolver=resolver)\nTest addresses\nThis library rejects email addresess that use the Special Use Domain Names invalid, localhost, test, and some others by raising EmailSyntaxError. This is to protect your system from abuse: You probably don't want a user to be able to cause an email to be sent to localhost (although they might be able to still do so via a malicious MX record). However, in your non-production test environments you may want to use @test or @myname.test email addresses. There are three ways you can allow this:\n\nAdd test_environment=True to the call to validate_email (see above).\nSet email_validator.TEST_ENVIRONMENT to True globally.\nRemove the special-use domain name that you want to use from email_validator.SPECIAL_USE_DOMAIN_NAMES, e.g.:\n\nimport email_validator\nemail_validator.SPECIAL_USE_DOMAIN_NAMES.remove(\"test\")\nIt is tempting to use @example.com/net/org in tests. They are not in this library's SPECIAL_USE_DOMAIN_NAMES list so you can, but shouldn't, use them. These domains are reserved to IANA for use in documentation so there is no risk of accidentally emailing someone at those domains. But beware that this library will nevertheless reject these domain names if DNS-based deliverability checks are not disabled because these domains do not resolve to domains that accept email. In tests, consider using your own domain name or @test or @myname.test instead.\nInternationalized email addresses\nThe email protocol SMTP and the domain name system DNS have historically\nonly allowed English (ASCII) characters in email addresses and domain names,\nrespectively. 
Each has adapted to internationalization in a separate\nway, creating two separate aspects to email address\ninternationalization.\nInternationalized domain names (IDN)\nThe first is internationalized domain names (RFC\n5891), a.k.a IDNA 2008. The DNS\nsystem has not been updated with Unicode support. Instead, internationalized\ndomain names are converted into a special IDNA ASCII \"Punycode\"\nform starting with xn--. When an email address has non-ASCII\ncharacters in its domain part, the domain part is replaced with its IDNA\nASCII equivalent form in the process of mail transmission. Your mail\nsubmission library probably does this for you transparently. (Compliance\naround the web is not very good though.) This library conforms to IDNA 2008\nusing the idna module by Kim Davies.\nInternationalized local parts\nThe second sort of internationalization is internationalization in the\nlocal part of the address (before the @-sign). In non-internationalized\nemail addresses, only English letters, numbers, and some punctuation\n(._!#$%&'^``*+-=~/?{|}) are allowed. In internationalized email address\nlocal parts, a wider range of Unicode characters are allowed.\nA surprisingly large number of Unicode characters are not safe to display,\nespecially when the email address is concatenated with other text, so this\nlibrary tries to protect you by not permitting resvered, non-, private use,\nformatting (which can be used to alter the display order of characters),\nwhitespace, and control characters, and combining characters\nas the first character of the local part and the domain name (so that they\ncannot combine with something outside of the email address string or with\nthe @-sign). See https://qntm.org/safe and https://trojansource.codes/\nfor relevant prior work. (Other than whitespace, these are checks that\nyou should be applying to nearly all user inputs in a security-sensitive\ncontext.)\nThese character checks are performed after Unicode normalization (see below),\nso you are only fully protected if you replace all user-provided email addresses\nwith the normalized email address string returned by this library. This does not\nguard against the well known problem that many Unicode characters look alike\n(or are identical), which can be used to fool humans reading displayed text.\nEmail addresses with these non-ASCII characters require that your mail\nsubmission library and the mail servers along the route to the destination,\nincluding your own outbound mail server, all support the\nSMTPUTF8 (RFC 6531) extension.\nSupport for SMTPUTF8 varies. See the allow_smtputf8 parameter.\nIf you know ahead of time that SMTPUTF8 is not supported by your mail submission stack\nBy default all internationalized forms are accepted by the validator.\nBut if you know ahead of time that SMTPUTF8 is not supported by your\nmail submission stack, then you must filter out addresses that require\nSMTPUTF8 using the allow_smtputf8=False keyword argument (see above).\nThis will cause the validation function to raise a EmailSyntaxError if\ndelivery would require SMTPUTF8. That's just in those cases where\nnon-ASCII characters appear before the @-sign. 
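In code, that filtering step might look like this (a sketch of the behaviour described above):

from email_validator import validate_email, EmailSyntaxError

try:
    emailinfo = validate_email(email, allow_smtputf8=False)
except EmailSyntaxError:
    ...  # reject: this address would need SMTPUTF8 to be delivered
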
If you do not set\nallow_smtputf8=False, you can also check the value of the smtputf8\nfield in the returned object.\nIf your mail submission library doesn't support Unicode at all --- even\nin the domain part of the address --- then immediately prior to mail\nsubmission you must replace the email address with its ASCII-ized form.\nThis library gives you back the ASCII-ized form in the ascii_email\nfield in the returned object, which you can get like this:\nemailinfo = validate_email(email, allow_smtputf8=False)\nemail = emailinfo.ascii_email\nThe local part is left alone (if it has internationalized characters\nallow_smtputf8=False will force validation to fail) and the domain\npart is converted to IDNA ASCII.\n(You probably should not do this at account creation time so you don't\nchange the user's login information without telling them.)\nNormalization\nUnicode Normalization\nThe use of Unicode in email addresses introduced a normalization\nproblem. Different Unicode strings can look identical and have the same\nsemantic meaning to the user. The normalized field returned on successful\nvalidation provides the correctly normalized form of the given email\naddress.\nFor example, the CJK fullwidth Latin letters are considered semantically\nequivalent in domain names to their ASCII counterparts. This library\nnormalizes them to their ASCII counterparts:\nemailinfo = validate_email(\"me@\uff24\uff4f\uff4d\uff41\uff49\uff4e.com\")\nprint(emailinfo.normalized)\nprint(emailinfo.ascii_email)\n# prints \"me@domain.com\" twice\nBecause an end-user might type their email address in different (but\nequivalent) un-normalized forms at different times, you ought to\nreplace what they enter with the normalized form immediately prior to\ngoing into your database (during account creation), querying your database\n(during login), or sending outbound mail. Normalization may also change\nthe length of an email address, and this may affect whether it is valid\nand acceptable by your SMTP provider.\nThe normalizations include lowercasing the domain part of the email\naddress (domain names are case-insensitive), Unicode \"NFC\"\nnormalization of the\nwhole address (which turns characters plus combining\ncharacters into\nprecomposed characters where possible, replacement of fullwidth and\nhalfwidth\ncharacters\nin the domain part, possibly other\nUTS46 mappings on the domain part,\nand conversion from Punycode to Unicode characters.\n(See RFC 6532 (internationalized email) section\n3.1 and RFC 5895\n(IDNA 2008) section 2.)\nOther Normalization\nNormalization is also applied to quoted-string local parts and domain\nliteral IPv6 addresses if you have allowed them by the allow_quoted_local\nand allow_domain_literal options. In quoted-string local parts, unnecessary\nbackslash escaping is removed and even the surrounding quotes are removed if\nthey are unnecessary. For IPv6 domain literals, the IPv6 address is\nnormalized to condensed form. 
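A hedged illustration of the quoted-string case (the expected output follows from the description above rather than a verified transcript):

emailinfo = validate_email('"not.actually.special"@example.org',
                           allow_quoted_local=True,
                           check_deliverability=False)
print(emailinfo.normalized)  # expected: not.actually.special@example.org
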
RFC 2142\nalso requires lowercase normalization for some specific mailbox names like postmaster@.\nExamples\nFor the email address test@joshdata.me, the returned object is:\nValidatedEmail(\n  normalized='test@joshdata.me',\n  local_part='test',\n  domain='joshdata.me',\n  ascii_email='test@joshdata.me',\n  ascii_local_part='test',\n  ascii_domain='joshdata.me',\n  smtputf8=False)\nFor the fictitious but valid address example@\u30c4.\u24c1\u24be\u24bb\u24ba, which has an\ninternationalized domain but ASCII local part, the returned object is:\nValidatedEmail(\n  normalized='example@\u30c4.life',\n  local_part='example',\n  domain='\u30c4.life',\n  ascii_email='example@xn--bdk.life',\n  ascii_local_part='example',\n  ascii_domain='xn--bdk.life',\n  smtputf8=False)\nNote that normalized and other fields provide a normalized form of the\nemail address, domain name, and (in other cases) local part (see earlier\ndiscussion of normalization), which you should use in your database.\nCalling validate_email with the ASCII form of the above email address,\nexample@xn--bdk.life, returns the exact same information (i.e., the\nnormalized field always will contain Unicode characters, not Punycode).\nFor the fictitious address \u30c4-test@joshdata.me, which has an\ninternationalized local part, the returned object is:\nValidatedEmail(\n  normalized='\u30c4-test@joshdata.me',\n  local_part='\u30c4-test',\n  domain='joshdata.me',\n  ascii_email=None,\n  ascii_local_part=None,\n  ascii_domain='joshdata.me',\n  smtputf8=True)\nNow smtputf8 is True and ascii_email is None because the local\npart of the address is internationalized. The local_part and normalized fields\nreturn the normalized form of the address.\nReturn value\nWhen an email address passes validation, the fields in the returned object\nare:\n\n\n\nField\nValue\n\n\n\n\nnormalized\nThe normalized form of the email address that you should put in your database. This combines the local_part and domain fields (see below).\n\n\nascii_email\nIf set, an ASCII-only form of the normalized email address by replacing the domain part with IDNA Punycode. This field will be present when an ASCII-only form of the email address exists (including if the email address is already ASCII). If the local part of the email address contains internationalized characters, ascii_email will be None. If set, it merely combines ascii_local_part and ascii_domain.\n\n\nlocal_part\nThe normalized local part of the given email address (before the @-sign). Normalization includes Unicode NFC normalization and removing unnecessary quoted-string quotes and backslashes. If allow_quoted_local is True and the surrounding quotes are necessary, the quotes will be present in this field.\n\n\nascii_local_part\nIf set, the local part, which is composed of ASCII characters only.\n\n\ndomain\nThe canonical internationalized Unicode form of the domain part of the email address. 
If the returned string contains non-ASCII characters, either the SMTPUTF8 feature of your mail relay will be required to transmit the message or else the email address's domain part must be converted to IDNA ASCII first: Use ascii_domain field instead.\n\n\nascii_domain\nThe IDNA Punycode-encoded form of the domain part of the given email address, as it would be transmitted on the wire.\n\n\ndomain_address\nIf domain literals are allowed and if the email address contains one, an ipaddress.IPv4Address or ipaddress.IPv6Address object.\n\n\nsmtputf8\nA boolean indicating that the SMTPUTF8 feature of your mail relay will be required to transmit messages to this address because the local part of the address has non-ASCII characters (the local part cannot be IDNA-encoded). If allow_smtputf8=False is passed as an argument, this flag will always be false because an exception is raised if it would have been true.\n\n\nmx\nA list of (priority, domain) tuples of MX records specified in the DNS for the domain (see RFC 5321 section 5). May be None if the deliverability check could not be completed because of a temporary issue like a timeout.\n\n\nmx_fallback_type\nNone if an MX record is found. If no MX records are actually specified in DNS and instead are inferred, through an obsolete mechanism, from A or AAAA records, the value is the type of DNS record used instead (A or AAAA). May be None if the deliverability check could not be completed because of a temporary issue like a timeout.\n\n\nspf\nAny SPF record found while checking deliverability. Only set if the SPF record is queried.\n\n\n\nAssumptions\nBy design, this validator does not pass all email addresses that\nstrictly conform to the standards. Many email address forms are obsolete\nor likely to cause trouble:\n\nThe validator assumes the email address is intended to be\nusable on the public Internet. The domain part\nof the email address must be a resolvable domain name\n(see the deliverability checks described above).\nMost Special Use Domain Names\nand their subdomains, as well as\ndomain names without a ., are rejected as a syntax error\n(except see the test_environment parameter above).\nObsolete email syntaxes are rejected:\nThe unusual \"(comment)\" syntax\nis rejected. Extremely old obsolete syntaxes are\nrejected. Quoted-string local parts and domain-literal addresses\nare rejected by default, but there are options to allow them (see above).\nNo one uses these forms anymore, and I can't think of any reason why anyone\nusing this library would need to accept them.\n\nTesting\nTests can be run using\npip install -r test_requirements.txt \nmake test\nTests run with mocked DNS responses. 
When adding or changing tests, temporarily turn on the BUILD_MOCKED_DNS_RESPONSE_DATA flag in tests/mocked_dns_responses.py to re-build the database of mocked responses from live queries.\nFor Project Maintainers\nThe package is distributed as a universal wheel and as a source package.\nTo release:\n\nUpdate CHANGELOG.md.\nUpdate the version number in email_validator/version.py.\nMake & push a commit with the new version number and make sure tests pass.\nMake & push a tag (see command below).\nMake a release at https://github.com/JoshData/python-email-validator/releases/new.\nPublish a source and wheel distribution to pypi (see command below).\n\ngit tag v$(grep version setup.cfg | sed \"s/.*= //\")\ngit push --tags\n./release_to_pypi.sh\n\n\n"}, {"name": "einops", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\neinops\nRecent updates:\nTweets\nContents\nInstallation  \nTutorials \nAPI \nEinMix\nLayers\nNaming \nWhy use einops notation?! \nSemantic information (being verbose in expectations)\nConvenient checks\nResult is strictly determined\nUniformity\nFramework independent behavior\nIndependence of framework terminology\nSupported frameworks \nCiting einops \nSupported python versions\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\neinops_video.mp4\n\n\n\n\n\neinops\n\n\n\n\nFlexible and powerful tensor operations for readable and reliable code. \nSupports numpy, pytorch, tensorflow, jax, and others.\nRecent updates:\n\n0.7.0rc2: no-hassle torch.compile, support of array api standard and more\n10'000: github reports that more than 10k project use einops \ud83c\udf82\nsee how to use einops with torch.compile\neinops 0.6.1: paddle backend added\neinops 0.6 introduces packing and unpacking\neinops 0.5: einsum is now a part of einops\nEinops paper is accepted for oral presentation at ICLR 2022 (yes, it worth reading).\nTalk recordings are available\n\n\nPrevious updates\n- flax and oneflow backend added\n- torch.jit.script is supported for pytorch layers\n- powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/master/docs/3-einmix-layer.ipynb) \n\nTweets\n\nIn case you need convincing arguments for setting aside time to learn about einsum and einops...\nTim Rockt\u00e4schel, FAIR\n\n\nWriting better code with PyTorch and einops \ud83d\udc4c\nAndrej Karpathy, AI at Tesla\n\n\nSlowly but surely, einops is seeping in to every nook and cranny of my code. 
If you find yourself shuffling around bazillion dimensional tensors, this might change your life\nNasim Rahaman, MILA (Montreal)\n\nMore testimonials\nContents\n\nInstallation\nDocumentation\nTutorial\nAPI micro-reference\nWhy using einops\nSupported frameworks\nCiting\nRepository and discussions\n\nInstallation  \nPlain and simple:\npip install einops\nTutorials \nTutorials are the most convenient way to see einops in action\n\npart 1: einops fundamentals\npart 2: einops for deep learning\npart 3: packing and unpacking\npart 4: improve pytorch code with einops\n\nKapil Sachdeva recorded a small intro to einops.\nAPI \neinops has a minimalistic yet powerful API.\nThree core operations provided (einops tutorial\nshows those cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions)\nfrom einops import rearrange, reduce, repeat\n# rearrange elements according to the pattern\noutput_tensor = rearrange(input_tensor, 't b c -> b c t')\n# combine rearrangement and reduction\noutput_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)\n# copy along a new axis\noutput_tensor = repeat(input_tensor, 'h w -> h w c', c=3)\nLater additions to the family are pack and unpack functions (better than stack/split/concatenate):\nfrom einops import pack, unpack\n# pack and unpack allow reversibly 'packing' multiple tensors into one.\n# Packed tensors may be of different dimensionality:\npacked,  ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')\nclass_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')\nFinally, einops provides einsum with a support of multi-lettered names:\nfrom einops import einsum, pack, unpack\n# einsum is like ... einsum, generic and flexible dot-product \n# but 1) axes can be multi-lettered  2) pattern goes last 3) works with multiple frameworks\nC = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')\nEinMix\nEinMix is a generic linear layer, perfect for MLP Mixers and similar architectures.\nLayers\nEinops provides layers (einops keeps a separate version for each framework) that reflect corresponding functions\nfrom einops.layers.torch      import Rearrange, Reduce\nfrom einops.layers.tensorflow import Rearrange, Reduce\nfrom einops.layers.flax       import Rearrange, Reduce\nfrom einops.layers.paddle     import Rearrange, Reduce\nfrom einops.layers.keras      import Rearrange, Reduce\nfrom einops.layers.chainer    import Rearrange, Reduce\n\nExample of using layers within a pytorch model\nExample given for pytorch, but code in other frameworks is almost identical\nfrom torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU\nfrom einops.layers.torch import Rearrange\n\nmodel = Sequential(\n    ...,\n    Conv2d(6, 16, kernel_size=5),\n    MaxPool2d(kernel_size=2),\n    # flattening without need to write forward\n    Rearrange('b c h w -> b (c h w)'),  \n    Linear(16*5*5, 120), \n    ReLU(),\n    Linear(120, 10), \n)\nNo more flatten needed!\nAdditionally, torch users will benefit from layers as those are script-able and compile-able.\n\nNaming \neinops stands for Einstein-Inspired Notation for operations\n(though \"Einstein operations\" is more attractive and easier to remember).\nNotation was loosely inspired by Einstein summation (in particular by numpy.einsum operation).\nWhy use einops notation?! 
\nSemantic information (being verbose in expectations)\ny = x.view(x.shape[0], -1)\ny = rearrange(x, 'b c h w -> b (c h w)')\nWhile these two lines are doing the same job in some context,\nthe second one provides information about the input and output.\nIn other words, einops focuses on interface: what is the input and output, not how the output is computed.\nThe next operation looks similar:\ny = rearrange(x, 'time c h w -> time (c h w)')\nbut it gives the reader a hint:\nthis is not an independent batch of images we are processing,\nbut rather a sequence (video).\nSemantic information makes the code easier to read and maintain.\nConvenient checks\nReconsider the same example:\ny = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)\ny = rearrange(x, 'b c h w -> b (c h w)')\nThe second line checks that the input has four dimensions,\nbut you can also specify particular dimensions.\nThat's opposed to just writing comments about shapes since comments don't prevent mistakes, not tested, and without code review tend to be outdated\ny = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)\ny = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)\nResult is strictly determined\nBelow we have at least two ways to define the depth-to-space operation\n# depth-to-space\nrearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)\nrearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)\nThere are at least four more ways to do it. Which one is used by the framework?\nThese details are ignored, since usually it makes no difference,\nbut it can make a big difference (e.g. if you use grouped convolutions in the next stage),\nand you'd like to specify this in your code.\nUniformity\nreduce(x, 'b c (x dx) -> b c x', 'max', dx=2)\nreduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)\nreduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)\nThese examples demonstrated that we don't use separate operations for 1d/2d/3d pooling,\nthose are all defined in a uniform way.\nSpace-to-depth and depth-to space are defined in many frameworks but how about width-to-height? Here you go:\nrearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)\nFramework independent behavior\nEven simple functions are defined differently by different frameworks\ny = x.flatten() # or flatten(x)\nSuppose x's shape was (3, 4, 5), then y has shape ...\n\nnumpy, pytorch, cupy, chainer: (60,)\nkeras, tensorflow.layers, gluon: (3, 20)\n\neinops works the same way in all frameworks.\nIndependence of framework terminology\nExample: tile vs repeat causes lots of confusion. To copy image along width:\nnp.tile(image, (1, 2))    # in numpy\nimage.repeat(1, 2)        # pytorch's repeat ~ numpy's tile\nWith einops you don't need to decipher which axis was repeated:\nrepeat(image, 'h w -> h (tile w)', tile=2)  # in numpy\nrepeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch\nrepeat(image, 'h w -> h (tile w)', tile=2)  # in tf\nrepeat(image, 'h w -> h (tile w)', tile=2)  # in jax\nrepeat(image, 'h w -> h (tile w)', tile=2)  # in cupy\n... 
(etc.)\nTestimonials provide users' perspective on the same question.\nSupported frameworks \nEinops works with ...\n\nnumpy\npytorch\ntensorflow\njax\ncupy\nchainer\ntf.keras\noneflow (experimental)\nflax (experimental)\npaddle (experimental)\n\nAdditionally, starting from einops 0.7.0 einops can be used with any framework that supports Python array API standard\nCiting einops \nPlease use the following bibtex record\n@inproceedings{\n    rogozhnikov2022einops,\n    title={Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation},\n    author={Alex Rogozhnikov},\n    booktitle={International Conference on Learning Representations},\n    year={2022},\n    url={https://openreview.net/forum?id=oapKSVM2bcj}\n}\n\nSupported python versions\neinops works with python 3.8 or later.\n\n\n", "description": "einops: Provides updates, tutorials, API details, and highlights the significance of einops notation."}, {"name": "EbookLib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nAbout EbookLib\nUsage\nReading\nWriting\nLicense\nAuthors\n\n\n\n\n\nREADME.md\n\n\n\n\nAbout EbookLib\nEbookLib is a Python library for managing EPUB2/EPUB3 and Kindle files. It's capable of reading and writing EPUB files programmatically (Kindle support is under development).\nThe API is designed to be as simple as possible, while at the same time making complex things possible too.  It has support for covers, table of contents, spine, guide, metadata and etc.\nEbookLib is used in Booktype from Sourcefabric, as well as sprits-it!, fanfiction2ebook, viserlalune and Telemeta.\nPackages of EbookLib for GNU/Linux are available in Debian and Ubuntu.\nSphinx documentation is generated from the templates in the docs/ directory and made available at http://ebooklib.readthedocs.io\nUsage\nReading\nimport ebooklib\nfrom ebooklib import epub\n\nbook = epub.read_epub('test.epub')\n\nfor image in book.get_items_of_type(ebooklib.ITEM_IMAGE):\n    print(image)\nWriting\nfrom ebooklib import epub\n\nbook = epub.EpubBook()\n\n# set metadata\nbook.set_identifier(\"id123456\")\nbook.set_title(\"Sample book\")\nbook.set_language(\"en\")\n\nbook.add_author(\"Author Authorowski\")\nbook.add_author(\n    \"Danko Bananko\",\n    file_as=\"Gospodin Danko Bananko\",\n    role=\"ill\",\n    uid=\"coauthor\",\n)\n\n# create chapter\nc1 = epub.EpubHtml(title=\"Intro\", file_name=\"chap_01.xhtml\", lang=\"hr\")\nc1.content = (\n    \"<h1>Intro heading</h1>\"\n    \"<p>Zaba je skocila u baru.</p>\"\n    '<p><img alt=\"[ebook logo]\" src=\"static/ebooklib.gif\"/><br/></p>'\n)\n\n# create image from the local image\nimage_content = open(\"ebooklib.gif\", \"rb\").read()\nimg = epub.EpubImage(\n    uid=\"image_1\",\n    file_name=\"static/ebooklib.gif\",\n    media_type=\"image/gif\",\n    content=image_content,\n)\n\n# add chapter\nbook.add_item(c1)\n# add image\nbook.add_item(img)\n\n# define Table Of Contents\nbook.toc = (\n    epub.Link(\"chap_01.xhtml\", \"Introduction\", \"intro\"),\n    (epub.Section(\"Simple book\"), (c1,)),\n)\n\n# add default NCX and Nav file\nbook.add_item(epub.EpubNcx())\nbook.add_item(epub.EpubNav())\n\n# define CSS style\nstyle = \"BODY {color: white;}\"\nnav_css = epub.EpubItem(\n    uid=\"style_nav\",\n    file_name=\"style/nav.css\",\n    media_type=\"text/css\",\n    content=style,\n)\n\n# add CSS file\nbook.add_item(nav_css)\n\n# basic spine\nbook.spine = [\"nav\", c1]\n\n# write to the file\nepub.write_epub(\"test.epub\", book, {})\nLicense\nEbookLib is licensed under the AGPL license.\nAuthors\nFull list of 
authors is in AUTHORS.txt file.\n\n\n", "description": "About EbookLib: Outlines its usage, reading, writing capabilities, license, and authors."}, {"name": "ebcdic", "readme": "\nebcdic is a Python package adding additional EBCDIC codecs for data\nexchange with legacy system. It works with Python 2.7 and Python 3.4+.\nEBCDIC is short for Extended Binary\nCoded Decimal Interchange Code and is a family of character encodings that is\nmainly used on mainframe computers. There is no real point in using it unless\nyou have to exchange data with legacy systems that still only support EBCDIC\nas character encoding.\n\nInstallation\nThe ebcdic package is available from https://pypi.python.org/pypi/ebcdic\nand can be installed using pip:\npip install ebcdic\n\n\nExample usage\nTo encode 'hello world' on EBCDIC systems in German speaking countries,\nuse:\n>>> import ebcdic\n>>> 'hello world'.encode('cp1141')\nb'\\x88\\x85\\x93\\x93\\x96@\\xa6\\x96\\x99\\x93\\x84O'\n\n\nSupported codecs\nThe ebcdic package includes EBCDIC codecs for the following regions:\n\ncp290 - Japan (Katakana)\ncp420 - Arabic bilingual\ncp424 - Israel (Hebrew)\ncp833 - Korea Extended (single byte)\ncp838 - Thailand\ncp870 - Eastern Europe (Poland, Hungary, Czech, Slovakia, Slovenia,\nCroatian, Serbia, Bulgarian); represents Latin-2\ncp1097 - Iran (Farsi)\ncp1140 - Australia, Brazil, Canada, New Zealand, Portugal, South Africa,\nUSA\ncp1141 - Austria, Germany, Switzerland\ncp1142 - Denmark, Norway\ncp1143 - Finland, Sweden\ncp1144 - Italy\ncp1145 - Latin America, Spain\ncp1146 - Great Britain, Ireland, North Ireland\ncp1147 - France\ncp1148 - International\ncp1148ms - International, Microsoft interpretation; similar to cp1148\nexcept that 0x15 is mapped to 0x85 (\u201cnext line\u201d) instead if 0x0a\n(\u201clinefeed\u201d)\ncp1149 - Iceland\n\nIt also includes legacy codecs:\n\ncp037 - Australia, Brazil, Canada, New Zealand, Portugal, South Africa;\nsimilar to cp1140 but without Euro sign\ncp273 - Austria, Germany, Switzerland; similar to cp1141 but without Euro\nsign\ncp277 - Denmark, Norway; similar to cp1142 but without Euro sign\ncp278 - Finland, Sweden; similar to cp1143 but without Euro sign\ncp280 - Italy; similar to cp1141 but without Euro sign\ncp284 - Latin America, Spain; similar to cp1145 but without Euro sign\ncp285 - Great Britain, Ireland, North Ireland; similar to cp1146 but\nwithout Euro sign\ncp297 - France; similar to cp1147 but without Euro sign\ncp500 - International; similar to cp1148 but without Euro sign\ncp500ms - International, Microsoft interpretation; identical to\ncodecs.cp500 similar to ebcdic.cp500 except that 0x15 is mapped to 0x85\n(\u201cnext line\u201d) instead if 0x0a (\u201clinefeed\u201d)\ncp871 - Iceland; similar to cp1149 but without Euro sign\ncp875 - Greece;  similar to cp9067 but without Euro sign and a few\nother characters\ncp1025 - Cyrillic\ncp1047 - Open Systems (MVS C compiler)\ncp1112 - Estonia, Latvia, Lithuania (Baltic)\ncp1122 - Estonia;  similar to cp1157 but without Euro sign\ncp1123 - Ukraine; similar to cp1158 but without Euro sign\n\nCodecs in the standard library overrule some of these codecs. At the time of\nthis writing this concerns cp037, cp273 (since 3.4), cp500 and cp1140.\nTo see get a list of EBCDIC codecs that are already provided by different\nsources, use ebcdic.ignored_codec_names(). 
For example, with Python 3.6\nthe result is:\n>>> ebcdic.ignored_codec_names()\n['cp037', 'cp1140', 'cp273', 'cp424', 'cp500', 'cp875']\n\n\nUnsupported codecs\nAccording to a\ncomprehensive list of code pages,\nthere are additional codecs this package does not support yet. Possible\nreasons and solutions are:\n\nIt\u2019s a double byte codec, e.g. cp834 (Korea). Technically CodecMapper\ncan easily support them by increasing the mapping size from 256 to 65536.\nDue lack of test date and access to Asian mainframes this was deemed too\nexperimental for now.\nThe codec contains combining characters, e.g. cp1132 (Lao) which allows\nto represent more than 256 characters combining several characters.\nJava does not include a mapping for the respective code page, e.g.\ncp410/880 (Cyrillic). You can add such a codec based on the information\nfound at the link above and submit an enhancement request for the Java\nstandard library. Once it is released, simply add the new codec to\nthe build.xml as described below.\nI missed a codec. Simply open an issue on Github at\nhttps://github.com/roskakori/CodecMapper/issues and it will be added with\nthe next version.\n\n\n\nSource code\nThese codecs have been generated using CodecMapper, available from\nhttps://github.com/roskakori/CodecMapper. Read the README in order to\nto build the ebcdic package from source.\nTo add another 8 bit EBCDIC codec just extend the ant target ebcdic in\nbuild.xml using a line like:\n<arg value=\"cpXXX\" />\nReplace XXX by the number of the 8 bit code page you want to include.\nThen run:\nant test\nto build and test the distribution.\nThe ebcdic/setup.py automatically includes the new encoding in the package\nand ebcdic/__init__.py registers it during import ebcdic, so no\nfurther steps are needed.\n\n\nChanges\nVersion 1.1.1, 2019-08-09\n\nMoved license information from README to LICENSE (#5). This required the\ndistribution to change from sdist to wheel because apparently it is a\nmajor challenge to include a text file in a platform independent way (#11).\nSadly this breaks compatibility with Python 2.6, 3.1, 3.2 and 3.3. If you\nstill need ebcdic with one of these Python versions, use\nebcdic-1.0.0.\nThis took several attempts and intermediate releases that where broken in\ndifferent ways on different platforms. To prevent people from accidentally\ninstalling one of these broken releases they have been removed from PyPI.\nIf you still want to take a look at them, use the\nrespective tags.\n\n\nVersion 1.0.0, 2019-06-06\n\nChanged development status to \u201cProduction/Stable\u201d.\nAdded international code pages cp500ms and cp1148ms which are the Microsoft\ninterpretations of the respective IBM code pages. 
The only difference is\nthat 0x1f is mapped to 0x85 (\u201cnext line\u201d) instead of 0x0a (\u201cnew line\u201d).\nNote that codecs.cp500 included with the Python standard library also uses\nthe Microsoft interpretation (#4).\nAdded Arabian bilingual code page 420.\nAdded Baltic code page 1112.\nAdded Cyrillic code page 1025.\nAdded Eastern Europe code page 870.\nAdded Estonian code pages 1122 and 1157.\nAdded Greek code page 875.\nAdded Farsi Bilingual code page 1097.\nAdded Hebrew code page 424 and 803.\nAdded Korean code page 833.\nAdded Meahreb/French code page 425.\nAdded Japanese (Katakana) code page 290.\nAdded Thailand code page 838.\nAdded Turkish code page 322.\nAdded Ukraine code page 1123.\nAdded Python 3.5 to 3.8 as supported version.\nImproved PEP8 conformance of generated codecs.\n\nVersion 0.7, 2014-11-17\n\nClarified which codecs are already part of the standard library and that\nthese codecs overrule the ebcdic package. Also added a function\nebcdic.ignored_codec_names() that returns the name of the EBCDIC codecs\nprovided by other means. To obtain access to ebcdic codecs overruled by\nthe standard library, use ebcdic.lookup().\nCleaned up (PEP8, __all__, typos, \u2026).\n\nVersion 0.6, 2014-11-15\n\nAdded support for Python 2.6+ and 3.1+ (#1).\nIncluded a modified version of gencodec.py that still builds maps\ninstead of tables so the generated codecs work with Python versions earlier\nthan 3.3. It also does a from __future__ import unicode_literals so the\ncodecs even work with Python 2.6+ using the same source code. As a side\neffect, this simplifies building the codecs because it removes the the need\nfor a local copy of the cpython source code.\n\nVersion 0.5, 2014-11-13\n\nInitial public release\n\n\n", "description": "ebcdic is a Python package adding additional EBCDIC codecs for data exchange with legacy systems."}, {"name": "docx2txt", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npython-docx2txt\nHow to install?\nHow to run?\n\n\n\n\n\nREADME.md\n\n\n\n\npython-docx2txt\nA pure python-based utility to extract text from docx files.\nThe code is taken and adapted from python-docx. It can however also extract text from header, footer and hyperlinks. It can now also extract images.\nHow to install?\npip install docx2txt\nHow to run?\na. From command line:\n# extract text\ndocx2txt file.docx\n# extract text and images\ndocx2txt -i /tmp/img_dir file.docx\nb. From python:\nimport docx2txt\n\n# extract text\ntext = docx2txt.process(\"file.docx\")\n\n# extract text and write images in /tmp/img_dir\ntext = docx2txt.process(\"file.docx\", \"/tmp/img_dir\") \n\n\n", "description": "python docx2txt: Describes how to install the library."}, {"name": "dnspython", "readme": "\ndnspython\n\n\n\n\n\n\nINTRODUCTION\ndnspython is a DNS toolkit for Python. It supports almost all record types. It\ncan be used for queries, zone transfers, and dynamic updates. It supports TSIG\nauthenticated messages and EDNS0.\ndnspython provides both high and low level access to DNS. The high level classes\nperform queries for data of a given name, type, and class, and return an answer\nset. The low level classes allow direct manipulation of DNS zones, messages,\nnames, and records.\nTo see a few of the ways dnspython can be used, look in the examples/\ndirectory.\ndnspython is a utility to work with DNS, /etc/hosts is thus not used. 
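As a small taste of the high-level API (the domain queried here is just an example):

import dns.resolver

answer = dns.resolver.resolve("dnspython.org", "MX")
for rdata in answer:
    print(rdata.preference, rdata.exchange)
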
For\nsimple forward DNS lookups, it's better to use socket.getaddrinfo() or\nsocket.gethostbyname().\ndnspython originated at Nominum where it was developed\nto facilitate the testing of DNS software.\nABOUT THIS RELEASE\nThis is dnspython 2.4.2.\nPlease read\nWhat's New for\ninformation about the changes in this release.\nINSTALLATION\n\n\nMany distributions have dnspython packaged for you, so you should\ncheck there first.\n\n\nTo use a wheel downloaded from PyPi, run:\npip install dnspython\n\n\nTo install from the source code, go into the top-level of the source code\nand run:\n\n\n    pip install --upgrade pip build\n    python -m build\n    pip install dist/*.whl\n\n\nTo install the latest from the master branch, run pip install git+https://github.com/rthalley/dnspython.git\n\nDnspython's default installation does not depend on any modules other than\nthose in the Python standard library.  To use some features, additional modules\nmust be installed.  For convenience, pip options are defined for the\nrequirements.\nIf you want to use DNS-over-HTTPS, run\npip install dnspython[doh].\nIf you want to use DNSSEC functionality, run\npip install dnspython[dnssec].\nIf you want to use internationalized domain names (IDNA)\nfunctionality, you must run\npip install dnspython[idna]\nIf you want to use the Trio asynchronous I/O package, run\npip install dnspython[trio].\nIf you want to use WMI on Windows to determine the active DNS settings\ninstead of the default registry scanning method, run\npip install dnspython[wmi].\nIf you want to try the experimental DNS-over-QUIC code, run\npip install dnspython[doq].\nNote that you can install any combination of the above, e.g.:\npip install dnspython[doh,dnssec,idna]\nNotices\nPython 2.x support ended with the release of 1.16.0.  Dnspython 2.0.0 through\n2.2.x support Python 3.6 and later.  For dnspython 2.3.x, the minimum\nsupported Python version is 3.7, and for 2.4.x the minimum supported verison is 3.8.\nWe plan to align future support with the lifetime of the Python 3 versions.\nDocumentation has moved to\ndnspython.readthedocs.io.\n", "description": "DNS toolkit for Python."}, {"name": "dlib", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ndlib C++ library  \nCompiling dlib C++ example programs\nCompiling your own C++ programs that use dlib\nCompiling dlib Python API\nRunning the unit test suite\ndlib sponsors\n\n\n\n\n\nREADME.md\n\n\n\n\ndlib C++ library   \nDlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. See http://dlib.net for the main project documentation and API reference.\nCompiling dlib C++ example programs\nGo into the examples folder and type:\nmkdir build; cd build; cmake .. ; cmake --build .\nThat will build all the examples.\nIf you have a CPU that supports AVX instructions then turn them on like this:\nmkdir build; cd build; cmake .. -DUSE_AVX_INSTRUCTIONS=1; cmake --build .\nDoing so will make some things run faster.\nFinally, Visual Studio users should usually do everything in 64bit mode.  By default Visual Studio is 32bit, both in its outputs and its own execution, so you have to explicitly tell it to use 64bits.  Since it's not the 1990s anymore you probably want to use 64bits.  Do that with a cmake invocation like this:\ncmake .. -G \"Visual Studio 14 2015 Win64\" -T host=x64 \nCompiling your own C++ programs that use dlib\nThe examples folder has a CMake tutorial that tells you what to do.  
There are also additional instructions on the dlib web site.\nAlternatively, if you are using the vcpkg dependency manager you can download and install dlib with CMake integration in a single command:\nvcpkg install dlib\nCompiling dlib Python API\nBefore you can run the Python example programs you must install the build requirement.\npython -m venv venv\npip install build\nThen you must compile dlib and install it in your environment. Type:\npython -m build --wheel\npip install dist/dlib-<version>.whl\nOr download dlib using PyPi:\npip install dlib\nRunning the unit test suite\nType the following to compile and run the dlib unit test suite:\ncd dlib/test\nmkdir build\ncd build\ncmake ..\ncmake --build . --config Release\n./dtest --runall\nNote that on windows your compiler might put the test executable in a subfolder called Release. If that's the case then you have to go to that folder before running the test.\nThis library is licensed under the Boost Software License, which can be found in dlib/LICENSE.txt.  The long and short of the license is that you can use dlib however you like, even in closed source commercial software.\ndlib sponsors\nThis research is based in part upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under contract number 2014-14071600010. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S. Government.\n\n\n", "description": "dlib is a C++ library with details on compiling, example programs, the Python API, and unit tests."}, {"name": "dill", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndill\nAbout Dill\nMajor Features\nCurrent Release\n\nDevelopment Version\n\nInstallation\nRequirements\nBasic Usage\nMore Information\nCitation\n\n\n\n\n\nREADME.md\n\n\n\n\ndill\nserialize all of Python\nAbout Dill\ndill extends Python's pickle module for serializing and de-serializing\nPython objects to the majority of the built-in Python types. Serialization\nis the process of converting an object to a byte stream, and the inverse\nof which is converting a byte stream back to a Python object hierarchy.\ndill provides the user the same interface as the pickle module, and\nalso includes some additional features. In addition to pickling Python\nobjects, dill provides the ability to save the state of an interpreter\nsession in a single command.  Hence, it would be feasible to save an\ninterpreter session, close the interpreter, ship the pickled file to\nanother computer, open a new interpreter, unpickle the session and\nthus continue from the 'saved' state of the original interpreter\nsession.\ndill can be used to store Python objects to a file, but the primary\nusage is to send Python objects across the network as a byte stream.\ndill is quite flexible, and allows arbitrary user defined classes\nand functions to be serialized.  Thus dill is not intended to be\nsecure against erroneously or maliciously constructed data. It is\nleft to the user to decide whether the data they unpickle is from\na trustworthy source.\ndill is part of pathos, a Python framework for heterogeneous computing.\ndill is in active development, so any user feedback, bug reports, comments,\nor suggestions are highly appreciated.  
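As a quick, hedged illustration of the interpreter-session pickling described above (the file name is only an example):

import dill

squared = lambda x: x**2            # some state built up in the current session
dill.dump_session("session.pkl")    # save the whole interpreter session to a file

# ... later, possibly in a fresh interpreter on another machine:
dill.load_session("session.pkl")    # restore the saved session
print(squared(3))                   # 9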
A list of issues is located at\nhttps://github.com/uqfoundation/dill/issues, with a legacy list maintained at\nhttps://uqfoundation.github.io/project/pathos/query.\nMajor Features\ndill can pickle the following standard types:\n\nnone, type, bool, int, float, complex, bytes, str,\ntuple, list, dict, file, buffer, builtin,\nPython classes, namedtuples, dataclasses, metaclasses,\ninstances of classes,\nset, frozenset, array, functions, exceptions\n\ndill can also pickle more 'exotic' standard types:\n\nfunctions with yields, nested functions, lambdas,\ncell, method, unboundmethod, module, code, methodwrapper,\nmethoddescriptor, getsetdescriptor, memberdescriptor, wrapperdescriptor,\ndictproxy, slice, notimplemented, ellipsis, quit\n\ndill cannot yet pickle these standard types:\n\nframe, generator, traceback\n\ndill also provides the capability to:\n\nsave and load Python interpreter sessions\nsave and extract the source code from functions and classes\ninteractively diagnose pickling errors\n\nCurrent Release\n\n\n\nThe latest released version of dill is available from:\nhttps://pypi.org/project/dill\ndill is distributed under a 3-clause BSD license.\nDevelopment Version\n\n\n\n\nYou can get the latest development version with all the shiny new features at:\nhttps://github.com/uqfoundation\nIf you have a new contribution, please submit a pull request.\nInstallation\ndill can be installed with pip::\n$ pip install dill\n\nTo optionally include the objgraph diagnostic tool in the install::\n$ pip install dill[graph]\n\nTo optionally include the gprof2dot diagnostic tool in the install::\n$ pip install dill[profile]\n\nFor windows users, to optionally install session history tools::\n$ pip install dill[readline]\n\nRequirements\ndill requires:\n\npython (or pypy), >=3.8\nsetuptools, >=42\n\nOptional requirements:\n\nobjgraph, >=1.7.2\ngprof2dot, >=2022.7.29\npyreadline, >=1.7.1 (on windows)\n\nBasic Usage\ndill is a drop-in replacement for pickle. Existing code can be\nupdated to allow complete pickling using::\n>>> import dill as pickle\n\nor::\n>>> from dill import dumps, loads\n\ndumps converts the object to a unique byte string, and loads performs\nthe inverse operation::\n>>> squared = lambda x: x**2\n>>> loads(dumps(squared))(3)\n9\n\nThere are a number of options to control serialization which are provided\nas keyword arguments to several dill functions:\n\nwith protocol, the pickle protocol level can be set. This uses the\nsame value as the pickle module, DEFAULT_PROTOCOL.\nwith byref=True, dill to behave a lot more like pickle with\ncertain objects (like modules) pickled by reference as opposed to\nattempting to pickle the object itself.\nwith recurse=True, objects referred to in the global dictionary are\nrecursively traced and pickled, instead of the default behavior of\nattempting to store the entire global dictionary.\nwith fmode, the contents of the file can be pickled along with the file\nhandle, which is useful if the object is being sent over the wire to a\nremote system which does not have the original file on disk. 
Options are\nHANDLE_FMODE for just the handle, CONTENTS_FMODE for the file content\nand FILE_FMODE for content and handle.\nwith ignore=False, objects reconstructed with types defined in the\ntop-level script environment use the existing type in the environment\nrather than a possibly different reconstructed type.\n\nThe default serialization can also be set globally in dill.settings.\nThus, we can modify how dill handles references to the global dictionary\nlocally or globally::\n>>> import dill.settings\n>>> dumps(absolute) == dumps(absolute, recurse=True)\nFalse\n>>> dill.settings['recurse'] = True\n>>> dumps(absolute) == dumps(absolute, recurse=True)\nTrue\n\ndill also includes source code inspection, as an alternate to pickling::\n>>> import dill.source\n>>> print(dill.source.getsource(squared))\nsquared = lambda x:x**2\n\nTo aid in debugging pickling issues, use dill.detect which provides\ntools like pickle tracing::\n>>> import dill.detect\n>>> with dill.detect.trace():\n>>>     dumps(squared)\n\u252c F1: <function <lambda> at 0x7fe074f8c280>\n\u251c\u252c F2: <function _create_function at 0x7fe074c49c10>\n\u2502\u2514 # F2 [34 B]\n\u251c\u252c Co: <code object <lambda> at 0x7fe07501eb30, file \"<stdin>\", line 1>\n\u2502\u251c\u252c F2: <function _create_code at 0x7fe074c49ca0>\n\u2502\u2502\u2514 # F2 [19 B]\n\u2502\u2514 # Co [87 B]\n\u251c\u252c D1: <dict object at 0x7fe0750d4680>\n\u2502\u2514 # D1 [22 B]\n\u251c\u252c D2: <dict object at 0x7fe074c5a1c0>\n\u2502\u2514 # D2 [2 B]\n\u251c\u252c D2: <dict object at 0x7fe074f903c0>\n\u2502\u251c\u252c D2: <dict object at 0x7fe074f8ebc0>\n\u2502\u2502\u2514 # D2 [2 B]\n\u2502\u2514 # D2 [23 B]\n\u2514 # F1 [180 B]\n\nWith trace, we see how dill stored the lambda (F1) by first storing\n_create_function, the underlying code object (Co) and _create_code\n(which is used to handle code objects), then we handle the reference to\nthe global dict (D2) plus other dictionaries (D1 and D2) that\nsave the lambda object's state. A # marks when the object is actually stored.\nMore Information\nProbably the best way to get started is to look at the documentation at\nhttp://dill.rtfd.io. Also see dill.tests for a set of scripts that\ndemonstrate how dill can serialize different Python objects. You can\nrun the test suite with python -m dill.tests. The contents of any\npickle file can be examined with undill.  As dill conforms to\nthe pickle interface, the examples and documentation found at\nhttp://docs.python.org/library/pickle.html also apply to dill\nif one will import dill as pickle. The source code is also generally\nwell documented, so further questions may be resolved by inspecting the\ncode itself. Please feel free to submit a ticket on github, or ask a\nquestion on stackoverflow (@Mike McKerns).\nIf you would like to share how you use dill in your work, please send\nan email (to mmckerns at uqfoundation dot org).\nCitation\nIf you use dill to do research that leads to publication, we ask that you\nacknowledge use of dill by citing the following in your publication::\nM.M. McKerns, L. Strand, T. Sullivan, A. Fang, M.A.G. 
Aivazis,\n\"Building a framework for predictive science\", Proceedings of\nthe 10th Python in Science Conference, 2011;\nhttp://arxiv.org/pdf/1202.1056\n\nMichael McKerns and Michael Aivazis,\n\"pathos: a framework for heterogeneous computing\", 2010- ;\nhttps://uqfoundation.github.io/project/pathos\n\nPlease see https://uqfoundation.github.io/project/pathos or\nhttp://arxiv.org/pdf/1202.1056 for further information.\n\n\n", "description": "Serialize all of Python."}, {"name": "deprecat", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndeprecat decorator\nInstallation\nCompatibility\nUsage\nDeprecated function\nDeprecated method\nDeprecated class\nSphinx Decorator - Functions\nDeprecated kwargs\nAuthors\n\n\n\n\n\nREADME.md\n\n\n\n\n\ndeprecat decorator\nPython @deprecat decorator to deprecate old python classes, functions or methods.\nInstallation\npip install deprecat\nCompatibility\nPython >=3.6\nUsage\nDeprecated function\nfrom deprecat import deprecat\n\n@deprecat(reason=\"this is a bad function\", version = '2.0')\ndef some_deprecated_function(x, y):\n    return x+y\n\nIf the user tries to use the deprecated function\nsome_deprecated_function(2, 3), they will have a warning:\n5\n\nDeprecationWarning: Call to deprecated function (or staticmethod) some_deprecated_function. (this is a bad function) -- Deprecated since version 2.0.\n\nsome_deprecated_function(2, 3)\n\nDeprecated method\nfrom deprecat import deprecat\n\nclass thisclassisuseful:\n\n    def __init__(self,value):\n        self.value = value\n\n    @deprecat(reason=\"this is a bad method\", version = 2.0)\n    def some_deprecated_function(self):\n        print(self.value)\n\nLet's try running this:\nx = thisclassisuseful('abc')\nx.some_deprecated_function()\n\nHere's what we get:\nabc\n\nDeprecationWarning: Call to deprecated method some_deprecated_function. \n(this is a bad method) -- Deprecated since version 2.0.\n\nx.some_deprecated_function()\n\nDeprecated class\nfrom deprecat import deprecat\n\n@deprecat(reason=\"useless\", version = 2.0)\nclass badclass:\n\n    def __init__(self):\n        print(\"you just ran this class\")\n\nNow when we call badclass() we get:\nyou just ran this class\n\nDeprecationWarning: Call to deprecated class badclass. \n(useless) -- Deprecated since version 2.0.\n\nbadclass()\n\nSphinx Decorator - Functions\nYou can use the sphinx decorator in deprecat to emit warnings and add a\nsphinx warning directive with custom title (using admonition) in\ndocstring. Let's say this is our function (this can be done for methods\nand classes as well, just like the classic deprecat decorator)\nfrom deprecat.sphinx import deprecat\n\n@deprecat(\n    reason=\"\"\" this is very buggy say bye\"\"\",\n    version='0.3.0')\ndef myfunction(x):\n    \"\"\"\n    Calculate the square of a number.\n\n    :param x: a number\n    :return: number * number\n    \"\"\"\n    return x*x\n\nNow when we try to use this as myfunction(3) we get the warning as\nusual:\nDeprecationWarning: Call to deprecated function (or staticmethod) myfunction. ( this is very buggy say bye) -- Deprecated since version 0.3.0.\n\nmyfunction(3)\n\n9\n\nAdditionally, we have a modified docstring (print(myfunction.__doc__)\nas follows:\nCalculate the square of a number.\n\n:param x: a number\n:return: number * number\n\n.. 
deprecated:: 0.3.0\n  this is very buggy say bye\n\nDeprecated kwargs\nSuppose you have this function where two of the keyword arguments are\nnot useful anymore so you can deprecate them like this.\nfrom deprecat.sphinx import deprecat\n\n@deprecat(deprecated_args={'a':{'version':'4.0','reason':'nothing'}, 'b':{'version':'3.0','reason':'something'}})\ndef multiply(a, b, c):\n    \"\"\"\n    Compute the product\n\n    Parameters\n    ----------\n    a: float\n        a is a nice number\n\n    b: float\n        b is also a nice number\n\n    c: float\n        c is ok too\n    \"\"\"\n    return a*b*c\n\nThis is the output we get when we try to run multiply(a=1,b=2,c=3)\nDeprecationWarning: Call to deprecated Parameter b. (something) -- Deprecated since v3.0.\nmultiply(a=1,b=2,c=3)\n\nDeprecationWarning: Call to deprecated Parameter a. (nothing) -- Deprecated since v4.0.\nmultiply(a=1,b=2,c=3)\n\n6\n\nNow, the cool part is your docstring (multiply.__doc__) get's modified\nas well. This is how it renders in Sphinx\nAuthors\nThe authors of this library are:\nMarcos CARDOSO, and\nLaurent LAPORTE.\nThe original code was made in this StackOverflow post by\nLeandro REGUEIRO,\nPatrizio BERTONI, and\nEric WIESER.\nModified and now maintained by: Meenal Jhajharia\n\n\n", "description": "deprecat decorator: Discusses installation, compatibility, and various uses of deprecated functions, methods, and classes."}, {"name": "defusedxml", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndefusedxml -- defusing XML bombs and other exploits\nSynopsis\nAttack vectors\nbillion laughs / exponential entity expansion\nquadratic blowup entity expansion\nexternal entity expansion (remote)\nexternal entity expansion (local file)\nDTD retrieval\nPython XML Libraries\nSettings in standard library\nxml.sax.handler Features\nDOM xml.dom.xmlbuilder.Options\ndefusedxml\ndefusedxml (package)\ndefusedxml.cElementTree\ndefusedxml.ElementTree\ndefusedxml.expatreader\ndefusedxml.sax\ndefusedxml.expatbuilder\ndefusedxml.minidom\ndefusedxml.pulldom\ndefusedxml.xmlrpc\ndefusedxml.lxml\ndefusedexpat\nModifications in expat\nHow to avoid XML vulnerabilities\nBest practices\nOther things to consider\nattribute blowup / hash collision attack\ndecompression bomb\nProcessing Instruction\nOther DTD features\nXPath\nXPath injection attacks\nXInclude\nXMLSchema location\nXSL Transformation\nRelated CVEs\nOther languages / frameworks\nPerl\nRuby\nPHP\nC# / .NET / Mono\nJava\nTODO\nLicense\nAcknowledgements\nReferences\nChangelog\ndefusedxml 0.8.0.dev1\ndefusedxml 0.7.0\ndefusedxml 0.7.0rc2\ndefusedxml 0.7.0rc1\ndefusedxml 0.6.0\ndefusedxml 0.6.0rc1\ndefusedxml 0.5.0\ndefusedxml 0.5.0.rc1\ndefusedxml 0.4.1\ndefusedxml 0.4\ndefusedxml 0.3\ndefusedxml 0.2\ndefusedxml 0.1\n\n\n\n\n\nREADME.md\n\n\n\n\ndefusedxml -- defusing XML bombs and other exploits\n\n\n\n\n\n\nChristian Heimes <christian@python.org>\nSynopsis\nThe results of an attack on a vulnerable XML library can be fairly\ndramatic. With just a few hundred Bytes of XML data an attacker can\noccupy several Gigabytes of memory within seconds. An attacker\ncan also keep CPUs busy for a long time with a small to medium size\nrequest. 
Under some circumstances it is even possible to access local\nfiles on your server, to circumvent a firewall, or to abuse services to\nrebound attacks to third parties.\nThe attacks use and abuse less common features of XML and its parsers.\nThe majority of developers are unacquainted with features such as\nprocessing instructions and entity expansions that XML inherited from\nSGML. At best they know about <!DOCTYPE> from experience with HTML but\nthey are not aware that a document type definition (DTD) can generate an\nHTTP request or load a file from the file system.\nNone of the issues is new. They have been known for a long time. Billion\nlaughs was first reported in 2003. Nevertheless some XML libraries and\napplications are still vulnerable and even heavy users of XML are\nsurprised by these features. It's hard to say whom to blame for the\nsituation. It's too short sighted to shift all blame on XML parsers and\nXML libraries for using insecure default settings. After all they\nproperly implement XML specifications. Application developers must not\nrely that a library is always configured for security and potential\nharmful data by default.\n\nTable of Contents\n\nAttack vectors\nbillion laughs / exponential entity expansion\nThe Billion Laughs\nattack -- also known as exponential entity expansion --uses multiple\nlevels of nested entities. The original example uses 9 levels of 10\nexpansions in each level to expand the string lol to a string of 3 *\n10 9 bytes, hence the name \"billion laughs\". The resulting\nstring occupies 3 GB (2.79 GiB) of memory; intermediate strings require\nadditional memory. Because most parsers don't cache the intermediate\nstep for every expansion it is repeated over and over again. It\nincreases the CPU load even more.\nAn XML document of just a few hundred bytes can disrupt all services on\na machine within seconds.\nExample XML:\n<!DOCTYPE xmlbomb [\n<!ENTITY a \"1234567890\" >\n<!ENTITY b \"&a;&a;&a;&a;&a;&a;&a;&a;\">\n<!ENTITY c \"&b;&b;&b;&b;&b;&b;&b;&b;\">\n<!ENTITY d \"&c;&c;&c;&c;&c;&c;&c;&c;\">\n]>\n<bomb>&d;</bomb>\n\nquadratic blowup entity expansion\nA quadratic blowup attack is similar to a Billion\nLaughs attack; it abuses\nentity expansion, too. Instead of nested entities it repeats one large\nentity with a couple of thousand chars over and over again. The attack\nisn't as efficient as the exponential case but it avoids triggering\ncountermeasures of parsers against heavily nested entities. Some parsers\nlimit the depth and breadth of a single entity but not the total amount\nof expanded text throughout an entire XML document.\nA medium-sized XML document with a couple of hundred kilobytes can\nrequire a couple of hundred MB to several GB of memory. When the attack\nis combined with some level of nested expansion an attacker is able to\nachieve a higher ratio of success.\n<!DOCTYPE bomb [\n<!ENTITY a \"xxxxxxx... a couple of ten thousand chars\">\n]>\n<bomb>&a;&a;&a;... repeat</bomb>\n\nexternal entity expansion (remote)\nEntity declarations can contain more than just text for replacement.\nThey can also point to external resources by public identifiers or\nsystem identifiers. System identifiers are standard URIs. When the URI\nis a URL (e.g. 
a http:// locator) some parsers download the resource\nfrom the remote location and embed them into the XML document verbatim.\nSimple example of a parsed external entity:\n<!DOCTYPE external [\n<!ENTITY ee SYSTEM \"http://www.python.org/some.xml\">\n]>\n<root>&ee;</root>\n\nThe case of parsed external entities works only for valid XML content.\nThe XML standard also supports unparsed external entities with a NData declaration.\nExternal entity expansion opens the door to plenty of exploits. An\nattacker can abuse a vulnerable XML library and application to rebound\nand forward network requests with the IP address of the server. It\nhighly depends on the parser and the application what kind of exploit is\npossible. For example:\n\nAn attacker can circumvent firewalls and gain access to restricted\nresources as all the requests are made from an internal and\ntrustworthy IP address, not from the outside.\nAn attacker can abuse a service to attack, spy on or DoS your\nservers but also third party services. The attack is disguised with\nthe IP address of the server and the attacker is able to utilize the\nhigh bandwidth of a big machine.\nAn attacker can exhaust additional resources on the machine, e.g.\nwith requests to a service that doesn't respond or responds with\nvery large files.\nAn attacker may gain knowledge, when, how often and from which IP\naddress an XML document is accessed.\nAn attacker could send mail from inside your network if the URL\nhandler supports smtp:// URIs.\n\nexternal entity expansion (local file)\nExternal entities with references to local files are a sub-case of\nexternal entity expansion. It's listed as an extra attack because it\ndeserves extra attention. Some XML libraries such as lxml disable\nnetwork access by default but still allow entity expansion with local\nfile access by default. Local files are either referenced with a\nfile:// URL or by a file path (either relative or absolute).\nAn attacker may be able to access and download all files that can be\nread by the application process. This may include critical configuration\nfiles, too.\n<!DOCTYPE external [\n<!ENTITY ee SYSTEM \"file:///PATH/TO/simple.xml\">\n]>\n<root>&ee;</root>\n\nDTD retrieval\nThis case is similar to external entity expansion, too. Some XML\nlibraries like Python's xml.dom.pulldom retrieve document type\ndefinitions from remote or local locations. 
Several attack scenarios\nfrom the external entity case apply to this issue as well.\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\"\n  \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n<html>\n    <head/>\n    <body>text</body>\n</html>\n\nPython XML Libraries\n\n\n\nkind\nsax\netree\nminidom\npulldom\nxmlrpc\nlxml\ngenshi\n\n\n\n\nbillion laughs\nTrue\nTrue\nTrue\nTrue\nTrue\nFalse (1)\nFalse (5)\n\n\nquadratic blowup\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\nFalse (5)\n\n\nexternal entity expansion (remote)\nTrue\nFalse (3)\nFalse (4)\nTrue\nfalse\nFalse (1)\nFalse (5)\n\n\nexternal entity expansion (local file)\nTrue\nFalse (3)\nFalse (4)\nTrue\nfalse\nTrue\nFalse (5)\n\n\nDTD retrieval\nTrue\nFalse\nFalse\nTrue\nfalse\nFalse (1)\nFalse\n\n\ngzip bomb\nFalse\nFalse\nFalse\nFalse\nTrue\npartly (2)\nFalse\n\n\nxpath support (7)\nFalse\nFalse\nFalse\nFalse\nFalse\nTrue\nFalse\n\n\nxsl(t) support (7)\nFalse\nFalse\nFalse\nFalse\nFalse\nTrue\nFalse\n\n\nxinclude support (7)\nFalse\nTrue (6)\nFalse\nFalse\nFalse\nTrue (6)\nTrue\n\n\nC library\nexpat\nexpat\nexpat\nexpat\nexpat\nlibxml2\nexpat\n\n\n\nvulnerabilities and features\n\nLxml is protected against billion laughs attacks and doesn't do\nnetwork lookups by default.\nlibxml2 and lxml are not directly vulnerable to gzip decompression\nbombs but they don't protect you against them either.\nxml.etree doesn't expand entities and raises a ParserError when an\nentity occurs.\nminidom doesn't expand entities and simply returns the unexpanded\nentity verbatim.\ngenshi.input of genshi 0.6 doesn't support entity expansion and\nraises a ParserError when an entity occurs.\nLibrary has (limited) XInclude support but requires an additional\nstep to process inclusion.\nThese are features but they may introduce exploitable holes, see\nOther things to consider\n\nSettings in standard library\nxml.sax.handler Features\n\n\nfeature_external_ges\n(http://xml.org/sax/features/external-general-entities)\ndisables external entity expansion\n\n\nfeature_external_pes\n(http://xml.org/sax/features/external-parameter-entities)\nthe option is ignored and doesn't modify any functionality\n\n\nDOM xml.dom.xmlbuilder.Options\n\n\nexternal_parameter_entities\nignored\n\n\nexternal_general_entities\nignored\n\n\nexternal_dtd_subset\nignored\n\n\nentities\nunsure\n\n\ndefusedxml\nThe defusedxml package\n(defusedxml on PyPI) contains\nseveral Python-only workarounds and fixes for denial of service and\nother vulnerabilities in Python's XML libraries. In order to benefit\nfrom the protection you just have to import and use the listed functions\n/ classes from the right defusedxml module instead of the original\nmodule. Merely defusedxml.xmlrpc is implemented as\nmonkey patch.\nInstead of:\n>>> from xml.etree.ElementTree import parse\n>>> et = parse(xmlfile)\n\nalter code to:\n>>> from defusedxml.ElementTree import parse\n>>> et = parse(xmlfile)\n\nAdditionally the package has an untested function to monkey patch\nall stdlib modules with defusedxml.defuse_stdlib().\nAll functions and parser classes accept three additional keyword\narguments. 
They return either the same objects as the original functions\nor compatible subclasses.\n\n\nforbid_dtd (default: False)\ndisallow XML with a <!DOCTYPE> processing instruction and raise a\nDTDForbidden exception when a DTD processing instruction is found.\n\n\nforbid_entities (default: True)\ndisallow XML with <!ENTITY> declarations inside the DTD and raise\nan EntitiesForbidden exception when an entity is declared.\n\n\nforbid_external (default: True)\ndisallow any access to remote or local resources in external\nentities or DTD and raising an ExternalReferenceForbidden\nexception when a DTD or entity references an external resource.\n\n\ndefusedxml (package)\nDefusedXmlException, DTDForbidden, EntitiesForbidden,\nExternalReferenceForbidden, NotSupportedError\ndefuse_stdlib() (experimental)\ndefusedxml.cElementTree\nNOTE defusedxml.cElementTree is deprecated and will be removed in\na future release. Import from defusedxml.ElementTree instead.\nparse(), iterparse(), fromstring(), XMLParser\ndefusedxml.ElementTree\nparse(), iterparse(), fromstring(), XMLParser\ndefusedxml.expatreader\ncreate_parser(), DefusedExpatParser\ndefusedxml.sax\nparse(), parseString(), make_parser()\ndefusedxml.expatbuilder\nparse(), parseString(), DefusedExpatBuilder, DefusedExpatBuilderNS\ndefusedxml.minidom\nparse(), parseString()\ndefusedxml.pulldom\nparse(), parseString()\ndefusedxml.xmlrpc\nThe fix is implemented as monkey patch for the stdlib's xmlrpc package\n(3.x) or xmlrpclib module (2.x). The function\nmonkey_patch() enables the fixes,\nunmonkey_patch() removes the patch and\nputs the code in its former state.\nThe monkey patch protects against XML related attacks as well as\ndecompression bombs and excessively large requests or responses. The\ndefault setting is 30 MB for requests, responses and gzip decompression.\nYou can modify the default by changing the module variable\nMAX_DATA. A value of\n-1 disables the limit.\ndefusedxml.lxml\nDEPRECATED The module is deprecated and will be removed in a future\nrelease.\nThe module acts as an example how you could protect code that uses\nlxml.etree. It implements a custom Element class that filters out Entity\ninstances, a custom parser factory and a thread local storage for parser\ninstances. It also has a check_docinfo() function which inspects a tree\nfor internal or external DTDs and entity declarations. In order to check\nfor entities lxml > 3.0 is required.\nparse(), fromstring() RestrictedElement, GlobalParserTLS,\ngetDefaultParser(), check_docinfo()\ndefusedexpat\nThe defusedexpat package\n(defusedexpat on PyPI)\ncomes with binary extensions and a modified\nexpat library instead of the standard\nexpat parser. 
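(Before the defusedexpat details, a short, hedged sketch of the pure-Python defusedxml API from the previous section; the payload is a shortened form of the entity bomb shown under the attack vectors, and the exception is raised because forbid_entities defaults to True.)

from defusedxml import EntitiesForbidden
from defusedxml.ElementTree import fromstring

bomb = """<!DOCTYPE xmlbomb [
<!ENTITY a "1234567890" >
<!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;">
]>
<bomb>&b;</bomb>"""

try:
    fromstring(bomb)            # forbid_entities defaults to True
except EntitiesForbidden as exc:
    print("rejected:", exc)     # the entity declaration is refused before any expansion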
It's basically a\nstand-alone version of the patches for Python's standard library C\nextensions.\nModifications in expat\nnew definitions:\nXML_BOMB_PROTECTION\nXML_DEFAULT_MAX_ENTITY_INDIRECTIONS\nXML_DEFAULT_MAX_ENTITY_EXPANSIONS\nXML_DEFAULT_RESET_DTD\n\nnew XML_FeatureEnum members:\nXML_FEATURE_MAX_ENTITY_INDIRECTIONS\nXML_FEATURE_MAX_ENTITY_EXPANSIONS\nXML_FEATURE_IGNORE_DTD\n\nnew XML_Error members:\nXML_ERROR_ENTITY_INDIRECTIONS\nXML_ERROR_ENTITY_EXPANSION\n\nnew API functions:\nint XML_GetFeature(XML_Parser parser,\n                   enum XML_FeatureEnum feature,\n                   long *value);\nint XML_SetFeature(XML_Parser parser,\n                   enum XML_FeatureEnum feature,\n                   long value);\nint XML_GetFeatureDefault(enum XML_FeatureEnum feature,\n                          long *value);\nint XML_SetFeatureDefault(enum XML_FeatureEnum feature,\n                          long value);\n\n\n\nXML_FEATURE_MAX_ENTITY_INDIRECTIONS\nLimit the amount of indirections that are allowed to occur during\nthe expansion of a nested entity. A counter starts when an entity\nreference is encountered. It resets after the entity is fully\nexpanded. The limit protects the parser against exponential entity\nexpansion attacks (aka billion laughs attack). When the limit is\nexceeded the parser stops and fails with\nXML_ERROR_ENTITY_INDIRECTIONS. A\nvalue of 0 disables the protection.\n\n\nSupported range\n0 .. UINT_MAX\n\n\nDefault\n40\n\n\n\n\nXML_FEATURE_MAX_ENTITY_EXPANSIONS\nLimit the total length of all entity expansions throughout the\nentire document. The lengths of all entities are accumulated in a\nparser variable. The setting protects against quadratic blowup\nattacks (lots of expansions of a large entity declaration). When the\nsum of all entities exceeds the limit, the parser stops and fails\nwith XML_ERROR_ENTITY_EXPANSION. A\nvalue of 0 disables the protection.\n\n\nSupported range\n0 .. UINT_MAX\n\n\nDefault\n8 MiB\n\n\n\n\nXML_FEATURE_RESET_DTD\nReset all DTD information after the <!DOCTYPE> block has been\nparsed. When the flag is set (default: false) all DTD information\nafter the endDoctypeDeclHandler has been called. The flag can be set\ninside the endDoctypeDeclHandler. Without DTD information any entity\nreference in the document body leads to\nXML_ERROR_UNDEFINED_ENTITY.\n\n\nSupported range\n0, 1\n\n\nDefault\n0\n\n\n\n\nHow to avoid XML vulnerabilities\nBest practices\n\nDon't allow DTDs\nDon't expand entities\nDon't resolve externals\nLimit parse depth\nLimit total input size\nLimit parse time\nFavor a SAX or iterparse-like parser for potential large data\nValidate and properly quote arguments to XSL transformations and\nXPath queries\nDon't use XPath expression from untrusted sources\nDon't apply XSL transformations that come untrusted sources\n\n(based on Brad Hill's Attacking XML\nSecurity)\nOther things to consider\nXML, XML parsers and processing libraries have more features and\npossible issue that could lead to DoS vulnerabilities or security\nexploits in applications. I have compiled an incomplete list of\ntheoretical issues that need further research and more attention. The\nlist is deliberately pessimistic and a bit paranoid, too. It contains\nthings that might go wrong under daffy circumstances.\nattribute blowup / hash collision attack\nXML parsers may use an algorithm with quadratic runtime O(n\n2) to handle attributes and namespaces. 
If it uses hash\ntables (dictionaries) to store attributes and namespaces the\nimplementation may be vulnerable to hash collision attacks, thus\nreducing the performance to O(n 2) again. In either case an\nattacker is able to forge a denial of service attack with an XML\ndocument that contains thousands upon thousands of attributes in a\nsingle node.\nI haven't researched yet if expat, pyexpat or libxml2 are vulnerable.\ndecompression bomb\nThe issue of decompression bombs (aka ZIP\nbomb) apply to all XML\nlibraries that can parse compressed XML stream like gzipped HTTP streams\nor LZMA-ed files. For an attacker it can reduce the amount of\ntransmitted data by three magnitudes or more. Gzip is able to compress 1\nGiB zeros to roughly 1 MB, lzma is even better:\n$ dd if=/dev/zero bs=1M count=1024 | gzip > zeros.gz\n$ dd if=/dev/zero bs=1M count=1024 | lzma -z > zeros.xy\n$ ls -sh zeros.*\n1020K zeros.gz\n 148K zeros.xy\n\nNone of Python's standard XML libraries decompress streams except for\nxmlrpclib. The module is vulnerable\n<https://bugs.python.org/issue16043> to decompression bombs.\nlxml can load and process compressed data through libxml2 transparently.\nlibxml2 can handle even very large blobs of compressed data efficiently\nwithout using too much memory. But it doesn't protect applications from\ndecompression bombs. A carefully written SAX or iterparse-like approach\ncan be safe.\nProcessing Instruction\nPI's like:\n<?xml-stylesheet type=\"text/xsl\" href=\"style.xsl\"?>\n\nmay impose more threats for XML processing. It depends if and how a\nprocessor handles processing instructions. The issue of URL retrieval\nwith network or local file access apply to processing instructions, too.\nOther DTD features\nDTD has more\nfeatures like <!NOTATION>. I haven't researched how these features may\nbe a security threat.\nXPath\nXPath statements may introduce DoS vulnerabilities. Code should never\nexecute queries from untrusted sources. An attacker may also be able to\ncreate an XML document that makes certain XPath queries costly or\nresource hungry.\nXPath injection attacks\nXPath injeciton attacks pretty much work like SQL injection attacks.\nArguments to XPath queries must be quoted and validated properly,\nespecially when they are taken from the user. The page Avoid the\ndangers of XPath\ninjection\nlist some ramifications of XPath injections.\nPython's standard library doesn't have XPath support. Lxml supports\nparameterized XPath queries which does proper quoting. You just have to\nuse its xpath() method correctly:\n# DON'T\n>>> tree.xpath(\"/tag[@id='%s']\" % value)\n\n# instead do\n>>> tree.xpath(\"/tag[@id=$tagid]\", tagid=name)\n\nXInclude\nXML Inclusion is\nanother way to load and include external files:\n<root xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n  <xi:include href=\"filename.txt\" parse=\"text\" />\n</root>\n\nThis feature should be disabled when XML files from an untrusted source\nare processed. Some Python XML libraries and libxml2 support XInclude\nbut don't have an option to sandbox inclusion and limit it to allowed\ndirectories.\nXMLSchema location\nA validating XML parser may download schema files from the information\nin a xsi:schemaLocation attribute.\n<ead xmlns=\"urn:isbn:1-931666-22-9\"\n     xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n     xsi:schemaLocation=\"urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd\">\n</ead>\n\nXSL Transformation\nYou should keep in mind that XSLT is a Turing complete language. 
Never\nprocess XSLT code from unknown or untrusted source! XSLT processors may\nallow you to interact with external resources in ways you can't even\nimagine. Some processors even support extensions that allow read/write\naccess to file system, access to JRE objects or scripting with Jython.\nExample from Attacking XML\nSecurity\nfor Xalan-J:\n<xsl:stylesheet version=\"1.0\"\n xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n xmlns:rt=\"http://xml.apache.org/xalan/java/java.lang.Runtime\"\n xmlns:ob=\"http://xml.apache.org/xalan/java/java.lang.Object\"\n exclude-result-prefixes= \"rt ob\">\n <xsl:template match=\"/\">\n   <xsl:variable name=\"runtimeObject\" select=\"rt:getRuntime()\"/>\n   <xsl:variable name=\"command\"\n     select=\"rt:exec($runtimeObject, &apos;c:\\Windows\\system32\\cmd.exe&apos;)\"/>\n   <xsl:variable name=\"commandAsString\" select=\"ob:toString($command)\"/>\n   <xsl:value-of select=\"$commandAsString\"/>\n </xsl:template>\n</xsl:stylesheet>\n\nRelated CVEs\n\n\nCVE-2013-1664\nUnrestricted entity expansion induces DoS vulnerabilities in Python\nXML libraries (XML bomb)\n\n\nCVE-2013-1665\nExternal entity expansion in Python XML libraries inflicts potential\nsecurity flaws and DoS vulnerabilities\n\n\nOther languages / frameworks\nSeveral other programming languages and frameworks are vulnerable as\nwell. A couple of them are affected by the fact that libxml2 up to 2.9.0\nhas no protection against quadratic blowup attacks. Most of them have\npotential dangerous default settings for entity expansion and external\nentities, too.\nPerl\nPerl's XML::Simple is vulnerable to quadratic entity expansion and\nexternal entity expansion (both local and remote).\nRuby\nRuby's REXML document parser is vulnerable to entity expansion attacks\n(both quadratic and exponential) but it doesn't do external entity\nexpansion by default. In order to counteract entity expansion you have\nto disable the feature:\nREXML::Document.entity_expansion_limit = 0\n\nlibxml-ruby and hpricot don't expand entities in their default\nconfiguration.\nPHP\nPHP's SimpleXML API is vulnerable to quadratic entity expansion and\nloads entities from local and remote resources. The option\nLIBXML_NONET disables network access but still allows local file\naccess. LIBXML_NOENT seems to have no effect on entity expansion in\nPHP 5.4.6.\nC# / .NET / Mono\nInformation in XML DoS and Defenses\n(MSDN) suggest\nthat .NET is vulnerable with its default settings. The article contains\ncode snippets how to create a secure XML reader:\nXmlReaderSettings settings = new XmlReaderSettings();\nsettings.ProhibitDtd = false;\nsettings.MaxCharactersFromEntities = 1024;\nsettings.XmlResolver = null;\nXmlReader reader = XmlReader.Create(stream, settings);\n\nJava\nUntested. The documentation of Xerces and its Xerces\nSecurityMananger\nsounds like Xerces is also vulnerable to billion laugh attacks with its\ndefault settings. It also does entity resolving when an\norg.xml.sax.EntityResolver is configured. 
I'm not yet sure about the\ndefault setting here.\nJava specialists suggest to have a custom builder factory:\nDocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();\nbuilderFactory.setXIncludeAware(False);\nbuilderFactory.setExpandEntityReferences(False);\nbuilderFactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, True);\n# either\nbuilderFactory.setFeature(\"http://apache.org/xml/features/disallow-doctype-decl\", True);\n# or if you need DTDs\nbuilderFactory.setFeature(\"http://xml.org/sax/features/external-general-entities\", False);\nbuilderFactory.setFeature(\"http://xml.org/sax/features/external-parameter-entities\", False);\nbuilderFactory.setFeature(\"http://apache.org/xml/features/nonvalidating/load-external-dtd\", False);\nbuilderFactory.setFeature(\"http://apache.org/xml/features/nonvalidating/load-dtd-grammar\", False);\n\nTODO\n\nDOM: Use xml.dom.xmlbuilder options for entity handling\nSAX: take feature_external_ges and feature_external_pes (?) into\naccount\ntest experimental monkey patching of stdlib modules\nimprove documentation\n\nLicense\nCopyright (c) 2013-2017 by Christian Heimes <christian@python.org>\nLicensed to PSF under a Contributor Agreement.\nSee https://www.python.org/psf/license for licensing details.\nAcknowledgements\n\n\nBrett Cannon (Python Core developer)\nreview and code cleanup\n\n\nAntoine Pitrou (Python Core developer)\ncode review\n\n\nAaron Patterson, Ben Murphy and Michael Koziarski (Ruby community)\nMany thanks to Aaron, Ben and Michael from the Ruby community for\ntheir report and assistance.\n\n\nThierry Carrez (OpenStack)\nMany thanks to Thierry for his report to the Python Security\nResponse Team on behalf of the OpenStack security team.\n\n\nCarl Meyer (Django)\nMany thanks to Carl for his report to PSRT on behalf of the Django\nsecurity team.\n\n\nDaniel Veillard (libxml2)\nMany thanks to Daniel for his insight and assistance with libxml2.\n\n\nsemantics GmbH (https://www.semantics.de/)\nMany thanks to my employer semantics for letting me work on the\nissue during working hours as part of semantics's open source\ninitiative.\n\n\nReferences\n\nXML DoS and Defenses\n(MSDN)\nBillion Laughs on\nWikipedia\nZIP bomb on Wikipedia\nConfigure SAX parsers for secure\nprocessing\nTesting for XML\nInjection\n\nChangelog\ndefusedxml 0.8.0.dev1\n\nDrop support for Python 2.7, 3.4, and 3.5.\nAdd defusedxml.ElementTree.fromstringlist()\nFix regression defusedxml.ElementTree.ParseError (#63) The\nParseError exception is now the same class object as\nxml.etree.ElementTree.ParseError again.\n\ndefusedxml 0.7.0\nRelease date: 4-Mar-2021\n\nNo changes\n\ndefusedxml 0.7.0rc2\nRelease date: 12-Jan-2021\n\nRe-add and deprecate defusedxml.cElementTree\nUse GitHub Actions instead of TravisCI\nRestore ElementTree attribute of xml.etree module after patching\n\ndefusedxml 0.7.0rc1\nRelease date: 04-May-2020\n\nAdd support for Python 3.9\ndefusedxml.cElementTree is not available with Python 3.9.\nPython 2 is deprecate. Support for Python 2 will be removed in\n0.8.0.\n\ndefusedxml 0.6.0\nRelease date: 17-Apr-2019\n\nIncrease test coverage.\nAdd badges to README.\n\ndefusedxml 0.6.0rc1\nRelease date: 14-Apr-2019\n\nTest on Python 3.7 stable and 3.8-dev\nDrop support for Python 3.4\nNo longer pass html argument to XMLParse. It has been deprecated\nand ignored for a long time. The DefusedXMLParser still takes a html\nargument. 
A deprecation warning is issued when the argument is False\nand a TypeError when it's True.\ndefusedxml now fails early when pyexpat stdlib module is not\navailable or broken.\ndefusedxml.ElementTree.__all__ now lists ParseError as public\nattribute.\nThe defusedxml.ElementTree and defusedxml.cElementTree modules had a\ntypo and used XMLParse instead of XMLParser as an alias for\nDefusedXMLParser. Both the old and fixed name are now available.\n\ndefusedxml 0.5.0\nRelease date: 07-Feb-2017\n\nNo changes\n\ndefusedxml 0.5.0.rc1\nRelease date: 28-Jan-2017\n\nAdd compatibility with Python 3.6\nDrop support for Python 2.6, 3.1, 3.2, 3.3\nFix lxml tests (XMLSyntaxError: Detected an entity reference loop)\n\ndefusedxml 0.4.1\nRelease date: 28-Mar-2013\n\nAdd more demo exploits, e.g. python_external.py and Xalan XSLT\ndemos.\nImproved documentation.\n\ndefusedxml 0.4\nRelease date: 25-Feb-2013\n\nAs per http://seclists.org/oss-sec/2013/q1/340 please REJECT\nCVE-2013-0278, CVE-2013-0279 and CVE-2013-0280 and use\nCVE-2013-1664, CVE-2013-1665 for OpenStack/etc.\nAdd missing parser_list argument to sax.make_parser(). The\nargument is ignored, though. (thanks to Florian Apolloner)\nAdd demo exploit for external entity attack on Python's SAX parser,\nXML-RPC and WebDAV.\n\ndefusedxml 0.3\nRelease date: 19-Feb-2013\n\nImprove documentation\n\ndefusedxml 0.2\nRelease date: 15-Feb-2013\n\nRename ExternalEntitiesForbidden to ExternalReferenceForbidden\nRename defusedxml.lxml.check_dtd() to check_docinfo()\nUnify argument names in callbacks\nAdd arguments and formatted representation to exceptions\nAdd forbid_external argument to all functions and classes\nMore tests\nLOTS of documentation\nAdd example code for other languages (Ruby, Perl, PHP) and parsers\n(Genshi)\nAdd protection against XML and gzip attacks to xmlrpclib\n\ndefusedxml 0.1\nRelease date: 08-Feb-2013\n\nInitial and internal release for PSRT review\n\n\n\n", "description": "defusedxml: Aims to defuse XML bombs and other exploits, detailing various attack vectors and XML libraries."}, {"name": "decorator", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nDecorators for Humans\nInstallation\nTesting\nRepository\nDocumentation\nFor the impatient\n\n\n\n\n\nREADME.rst\n\n\n\n\nDecorators for Humans\nThe goal of the decorator module is to make it easy to define\nsignature-preserving function decorators and decorator factories.\nIt also includes an implementation of multiple dispatch and other niceties\n(please check the docs). It is released under a two-clauses\nBSD license, i.e. basically you can do whatever you want with it but I am not\nresponsible.\n\nInstallation\nIf you are lazy, just perform\n\n$ pip install decorator\nwhich will install just the module on your system.\nIf you prefer to install the full distribution from source, including\nthe documentation, clone the GitHub repo or download the tarball, unpack it and run\n\n$ pip install .\nin the main directory, possibly as superuser.\n\nTesting\nIf you have the source code installation you can run the tests with\n\n$ python src/tests/test.py -v\nor (if you have setuptools installed)\n\n$ python setup.py test\nNotice that you may run into trouble if in your system there\nis an older version of the decorator module; in such a case remove the\nold version. It is safe even to copy the module decorator.py over\nan existing one, since we kept backward-compatibility for a long time.\n\nRepository\nThe project is hosted on GitHub. 
You can look at the source here:\n\nhttps://github.com/micheles/decorator\n\nDocumentation\nThe documentation has been moved to https://github.com/micheles/decorator/blob/master/docs/documentation.md\nFrom there you can get a PDF version by simply using the print\nfunctionality of your browser.\nHere is the documentation for previous versions of the module:\nhttps://github.com/micheles/decorator/blob/4.3.2/docs/tests.documentation.rst\nhttps://github.com/micheles/decorator/blob/4.2.1/docs/tests.documentation.rst\nhttps://github.com/micheles/decorator/blob/4.1.2/docs/tests.documentation.rst\nhttps://github.com/micheles/decorator/blob/4.0.0/documentation.rst\nhttps://github.com/micheles/decorator/blob/3.4.2/documentation.rst\n\nFor the impatient\nHere is an example of how to define a family of decorators tracing slow\noperations:\nfrom decorator import decorator\n\n@decorator\ndef warn_slow(func, timelimit=60, *args, **kw):\n    t0 = time.time()\n    result = func(*args, **kw)\n    dt = time.time() - t0\n    if dt > timelimit:\n        logging.warning('%s took %d seconds', func.__name__, dt)\n    else:\n        logging.info('%s took %d seconds', func.__name__, dt)\n    return result\n\n@warn_slow  # warn if it takes more than 1 minute\ndef preprocess_input_files(inputdir, tempdir):\n    ...\n\n@warn_slow(timelimit=600)  # warn if it takes more than 10 minutes\ndef run_calculation(tempdir, outdir):\n    ...\nEnjoy!\n\n\n", "description": "Simplifies the usage of decorators."}, {"name": "debugpy", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndebugpy - a debugger for Python\nCoverage\ndebugpy CLI Usage\nDebugging a script file\nDebugging a module\nAttaching to a running process by ID\nIgnoring subprocesses\ndebugpy Import usage\nEnabling debugging\nWaiting for the client to attach\nbreakpoint() function\nDebugger logging\n\n\n\n\n\nREADME.md\n\n\n\n\ndebugpy - a debugger for Python\nAn implementation of the Debug Adapter Protocol for Python 3.\n\n\n\n\nCoverage\n\n\n\nOS\nCoverage\n\n\n\n\nWindows\n\n\n\nLinux\n\n\n\nMac\n\n\n\n\ndebugpy CLI Usage\nFor full details, see the Command Line Reference.\nDebugging a script file\nTo run a script file with debugging enabled, but without waiting for the client to attach (i.e. code starts executing immediately):\n-m debugpy --listen localhost:5678 myfile.py\nTo wait until the client attaches before running your code, use the --wait-for-client switch.\n-m debugpy --listen localhost:5678 --wait-for-client myfile.py\nThe hostname passed to --listen specifies the interface on which the debug adapter will be listening for connections from DAP clients. It can be omitted, with only the port number specified:\n-m debugpy --listen 5678 ...\nin which case the default interface is 127.0.0.1.\nTo be able to attach from another machine, make sure that the adapter is listening on a public interface - using 0.0.0.0 will make it listen on all available interfaces:\n-m debugpy --listen 0.0.0.0:5678 myfile.py\nThis should only be done on secure networks, since anyone who can connect to the specified port can then execute arbitrary code within the debugged process.\nTo pass arguments to the script, just specify them after the filename. 
This works the same as with Python itself - everything up to  the filename is processed by debugpy, but everything after that becomes sys.argv of the running process.\nDebugging a module\nTo run a module, use the -m switch instead of filename:\n-m debugpy --listen localhost:5678 -m mymodule\nSame as with scripts, command line arguments can be passed to the module by specifying them after the module name. All other debugpy switches work identically in this mode; in particular, --wait-for-client can be used to block execution until the client attaches.\nAttaching to a running process by ID\nThe following command injects the debugger into a process with a given PID that is running Python code. Once the command returns, a debugpy server is running within the process, as if that process was launched via -m debugpy itself.\n-m debugpy --listen localhost:5678 --pid 12345\nIgnoring subprocesses\nThe following command will ignore subprocesses started by the debugged process.\n-m debugpy --listen localhost:5678 --pid 12345 --configure-subProcess False\ndebugpy Import usage\nFor full details, see the API reference.\nEnabling debugging\nAt the beginning of your script, import debugpy, and call debugpy.listen() to start the debug adapter, passing a (host, port) tuple as the first argument.\nimport debugpy\ndebugpy.listen((\"localhost\", 5678))\n...\nAs with the --listen command line switch, hostname can be omitted, and defaults to \"127.0.0.1\":\ndebugpy.listen(5678)\n...\nWaiting for the client to attach\nUse the debugpy.wait_for_client() function to block program execution until the client is attached.\nimport debugpy\ndebugpy.listen(5678)\ndebugpy.wait_for_client()  # blocks execution until client is attached\n...\nbreakpoint() function\nWhere available, debugpy supports the standard breakpoint() function for programmatic breakpoints. Use debugpy.breakpoint() function to get the same behavior when breakpoint() handler installed by debugpy is overridden by another handler. If the debugger is attached when either of these functions is invoked, it will pause execution on the calling line, as if it had a breakpoint set. If there's no client attached, the functions do nothing, and the code continues to execute normally.\nimport debugpy\ndebugpy.listen(...)\n\nwhile True:\n    ...\n    breakpoint()  # or debugpy.breakpoint()\n    ...\nDebugger logging\nTo enable debugger internal logging via CLI, the --log-to switch can be used:\n-m debugpy --log-to path/to/logs ...\nWhen using the API, the same can be done with debugpy.log_to():\ndebugpy.log_to('path/to/logs')\ndebugpy.listen(...)\nIn both cases, the environment variable DEBUGPY_LOG_DIR can also be set to the same effect.\nWhen logging is enabled, debugpy will create several log files with names matching debugpy*.log in the specified directory, corresponding to different components of the debugger. When subprocess debugging is enabled, separate logs are created for every subprocess.\n\n\n", "description": "debugpy is a debugger for Python, covering its usage, debugging scripts and modules, attaching processes, enabling debugging, and more.", "category": "Debugging"}, {"name": "databricks-sql-connector", "readme": "\nDatabricks SQL Connector for Python\n\n\nThe Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. 
It conforms to the Python DB API 2.0 specification and exposes a SQLAlchemy dialect for use with tools like pandas and alembic which use SQLAlchemy to execute DDL.\nThis connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the ArrowQueue class to provide a natural API to get several rows at a time.\nYou are welcome to file an issue here for general use cases. You can also contact Databricks Support here.\nRequirements\nPython 3.7 or above is required.\nDocumentation\nFor the latest documentation, see\n\nDatabricks\nAzure Databricks\n\nQuickstart\nInstall the library with pip install databricks-sql-connector\nNote: Don't hard-code authentication secrets into your Python. Use environment variables\nexport DATABRICKS_HOST=********.databricks.com\nexport DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************\nexport DATABRICKS_TOKEN=dapi********************************\n\nExample usage:\nimport os\nfrom databricks import sql\n\nhost = os.getenv(\"DATABRICKS_HOST\")\nhttp_path = os.getenv(\"DATABRICKS_HTTP_PATH\")\naccess_token = os.getenv(\"DATABRICKS_TOKEN\")\n\nconnection = sql.connect(\n  server_hostname=host,\n  http_path=http_path,\n  access_token=access_token)\n\ncursor = connection.cursor()\n\ncursor.execute('SELECT * FROM RANGE(10)')\nresult = cursor.fetchall()\nfor row in result:\n  print(row)\n\ncursor.close()\nconnection.close()\n\nIn the above example:\n\nserver-hostname is the Databricks instance host name.\nhttp-path is the HTTP Path either to a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef),\nor to a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123)\npersonal-access-token is the Databricks Personal Access Token for the account that will execute commands and queries\n\nContributing\nSee CONTRIBUTING.md\nLicense\nApache License 2.0\n"}, {"name": "Cython", "readme": "\nThe Cython language makes writing C extensions for the Python language as\neasy as Python itself.  Cython is a source code translator based on Pyrex,\nbut supports more cutting edge functionality and optimizations.\nThe Cython language is a superset of the Python language (almost all Python\ncode is also valid Cython code), but Cython additionally supports optional\nstatic typing to natively call C functions, operate with C++ classes and\ndeclare fast C types on variables and class attributes.  This allows the\ncompiler to generate very efficient C code from Cython code.\nThis makes Cython the ideal language for writing glue code for external\nC/C++ libraries, and for fast C modules that speed up the execution of\nPython code.\nNote that for one-time builds, e.g. for CI/testing, on platforms that are not\ncovered by one of the wheel packages provided on PyPI and the pure Python wheel\nthat we provide is not used, it is substantially faster than a full source build\nto install an uncompiled (slower) version of Cython with:\npip install Cython --install-option=\"--no-cython-compile\"\n", "description": "C extensions for Python."}, {"name": "cymem", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ncymem: A Cython Memory Helper\nOverview\nInstallation\nExample Use Case: An array of structs\nCustom Allocators\n\n\n\n\n\nREADME.md\n\n\n\n\n\ncymem: A Cython Memory Helper\ncymem provides two small memory-management helpers for Cython. 
They make it easy\nto tie memory to a Python object's life-cycle, so that the memory is freed when\nthe object is garbage collected.\n\n\n\n\nOverview\nThe most useful is cymem.Pool, which acts as a thin wrapper around the calloc\nfunction:\nfrom cymem.cymem cimport Pool\ncdef Pool mem = Pool()\ndata1 = <int*>mem.alloc(10, sizeof(int))\ndata2 = <float*>mem.alloc(12, sizeof(float))\nThe Pool object saves the memory addresses internally, and frees them when the\nobject is garbage collected. Typically you'll attach the Pool to some cdef'd\nclass. This is particularly handy for deeply nested structs, which have\ncomplicated initialization functions. Just pass the Pool object into the\ninitializer, and you don't have to worry about freeing your struct at all \u2014 all\nof the calls to Pool.alloc will be automatically freed when the Pool\nexpires.\nInstallation\nInstallation is via pip, and requires\nCython. Before installing, make sure that your pip,\nsetuptools and wheel are up to date.\npip install -U pip setuptools wheel\npip install cymem\nExample Use Case: An array of structs\nLet's say we want a sequence of sparse matrices. We need fast access, and a\nPython list isn't performing well enough. So, we want a C-array or C++ vector,\nwhich means we need the sparse matrix to be a C-level struct \u2014 it can't be a\nPython class. We can write this easily enough in Cython:\n\"\"\"Example without Cymem\n\nTo use an array of structs, we must carefully walk the data structure when\nwe deallocate it.\n\"\"\"\n\nfrom libc.stdlib cimport calloc, free\n\ncdef struct SparseRow:\n    size_t length\n    size_t* indices\n    double* values\n\ncdef struct SparseMatrix:\n    size_t length\n    SparseRow* rows\n\ncdef class MatrixArray:\n    cdef size_t length\n    cdef SparseMatrix** matrices\n\n    def __cinit__(self, list py_matrices):\n        self.length = 0\n        self.matrices = NULL\n\n    def __init__(self, list py_matrices):\n        self.length = len(py_matrices)\n        self.matrices = <SparseMatrix**>calloc(len(py_matrices), sizeof(SparseMatrix*))\n\n        for i, py_matrix in enumerate(py_matrices):\n            self.matrices[i] = sparse_matrix_init(py_matrix)\n\n    def __dealloc__(self):\n        for i in range(self.length):\n            sparse_matrix_free(self.matrices[i])\n        free(self.matrices)\n\n\ncdef SparseMatrix* sparse_matrix_init(list py_matrix) except NULL:\n    sm = <SparseMatrix*>calloc(1, sizeof(SparseMatrix))\n    sm.length = len(py_matrix)\n    sm.rows = <SparseRow*>calloc(sm.length, sizeof(SparseRow))\n    cdef size_t i, j\n    cdef dict py_row\n    cdef size_t idx\n    cdef double value\n    for i, py_row in enumerate(py_matrix):\n        sm.rows[i].length = len(py_row)\n        sm.rows[i].indices = <size_t*>calloc(sm.rows[i].length, sizeof(size_t))\n        sm.rows[i].values = <double*>calloc(sm.rows[i].length, sizeof(double))\n        for j, (idx, value) in enumerate(py_row.items()):\n            sm.rows[i].indices[j] = idx\n            sm.rows[i].values[j] = value\n    return sm\n\n\ncdef void* sparse_matrix_free(SparseMatrix* sm) except *:\n    cdef size_t i\n    for i in range(sm.length):\n        free(sm.rows[i].indices)\n        free(sm.rows[i].values)\n    free(sm.rows)\n    free(sm)\nWe wrap the data structure in a Python ref-counted class at as low a level as we\ncan, given our performance constraints. 
This allows us to allocate and free the\nmemory in the __cinit__ and __dealloc__ Cython special methods.\nHowever, it's very easy to make mistakes when writing the __dealloc__ and\nsparse_matrix_free functions, leading to memory leaks. cymem prevents you from\nwriting these deallocators at all. Instead, you write as follows:\n\"\"\"Example with Cymem.\n\nMemory allocation is hidden behind the Pool class, which remembers the\naddresses it gives out.  When the Pool object is garbage collected, all of\nits addresses are freed.\n\nWe don't need to write MatrixArray.__dealloc__ or sparse_matrix_free,\neliminating a common class of bugs.\n\"\"\"\nfrom cymem.cymem cimport Pool\n\ncdef struct SparseRow:\n    size_t length\n    size_t* indices\n    double* values\n\ncdef struct SparseMatrix:\n    size_t length\n    SparseRow* rows\n\n\ncdef class MatrixArray:\n    cdef size_t length\n    cdef SparseMatrix** matrices\n    cdef Pool mem\n\n    def __cinit__(self, list py_matrices):\n        self.mem = None\n        self.length = 0\n        self.matrices = NULL\n\n    def __init__(self, list py_matrices):\n        self.mem = Pool()\n        self.length = len(py_matrices)\n        self.matrices = <SparseMatrix**>self.mem.alloc(self.length, sizeof(SparseMatrix*))\n        for i, py_matrix in enumerate(py_matrices):\n            self.matrices[i] = sparse_matrix_init(self.mem, py_matrix)\n\ncdef SparseMatrix* sparse_matrix_init_cymem(Pool mem, list py_matrix) except NULL:\n    sm = <SparseMatrix*>mem.alloc(1, sizeof(SparseMatrix))\n    sm.length = len(py_matrix)\n    sm.rows = <SparseRow*>mem.alloc(sm.length, sizeof(SparseRow))\n    cdef size_t i, j\n    cdef dict py_row\n    cdef size_t idx\n    cdef double value\n    for i, py_row in enumerate(py_matrix):\n        sm.rows[i].length = len(py_row)\n        sm.rows[i].indices = <size_t*>mem.alloc(sm.rows[i].length, sizeof(size_t))\n        sm.rows[i].values = <double*>mem.alloc(sm.rows[i].length, sizeof(double))\n        for j, (idx, value) in enumerate(py_row.items()):\n            sm.rows[i].indices[j] = idx\n            sm.rows[i].values[j] = value\n    return sm\nAll that the Pool class does is remember the addresses it gives out. When the\nMatrixArray object is garbage-collected, the Pool object will also be\ngarbage collected, which triggers a call to Pool.__dealloc__. The Pool then\nfrees all of its addresses. This saves you from walking back over your nested\ndata structures to free them, eliminating a common class of errors.\nCustom Allocators\nSometimes external C libraries use private functions to allocate and free\nobjects, but we'd still like the laziness of the Pool.\nfrom cymem.cymem cimport Pool, WrapMalloc, WrapFree\ncdef Pool mem = Pool(WrapMalloc(priv_malloc), WrapFree(priv_free))\n\n\n", "description": "Memory allocation helpers for Cython to tie memory to Python object lifetime"}, {"name": "cycler", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n \n |Travis|_ \n\ncycler: composable cycles\nDocs: https://matplotlib.org/cycler/\n\n\n", "description": "Composable style cycles."}, {"name": "cssselect2", "readme": "\ncssselect2 is a straightforward implementation of CSS4 Selectors for markup\ndocuments (HTML, XML, etc.) 
that can be read by ElementTree-like parsers\n(including cElementTree, lxml, html5lib, etc.)\n\nFree software: BSD license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/cssselect2\nChangelog: https://github.com/Kozea/cssselect2/releases\nCode, issues, tests: https://github.com/Kozea/cssselect2\nCode of conduct: https://www.courtbouillon.org/code-of-conduct.html\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\ncssselect2 has been created and developed by Kozea (https://kozea.fr/).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org/).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to cssselect2. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the BSD\n3-clause license, without any additional terms or conditions. For full\nauthorship information, see the version control history.\n", "description": "CSS selector library."}, {"name": "cryptography", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npyca/cryptography\nDiscussion\nSecurity\n\n\n\n\n\nREADME.rst\n\n\n\n\npyca/cryptography\n\n\n\n\ncryptography is a package which provides cryptographic recipes and\nprimitives to Python developers. Our goal is for it to be your \"cryptographic\nstandard library\". It supports Python 3.7+ and PyPy3 7.3.11+.\ncryptography includes both high level recipes and low level interfaces to\ncommon cryptographic algorithms such as symmetric ciphers, message digests, and\nkey derivation functions. For example, to encrypt something with\ncryptography's high level symmetric encryption recipe:\n>>> from cryptography.fernet import Fernet\n>>> # Put this somewhere safe!\n>>> key = Fernet.generate_key()\n>>> f = Fernet(key)\n>>> token = f.encrypt(b\"A really secret message. Not for prying eyes.\")\n>>> token\nb'...'\n>>> f.decrypt(token)\nb'A really secret message. Not for prying eyes.'\nYou can find more information in the documentation.\nYou can install cryptography with:\n$ pip install cryptography\nFor full details see the installation documentation.\n\nDiscussion\nIf you run into bugs, you can file them in our issue tracker.\nWe maintain a cryptography-dev mailing list for development discussion.\nYou can also join #pyca on irc.libera.chat to ask questions or get\ninvolved.\n\nSecurity\nNeed to report a security issue? 
Please consult our security reporting\ndocumentation.\n\n\n", "description": "Cryptography and SSL/TLS library.", "category": "Cryptography"}, {"name": "countryinfo", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCountry Info\nTable of Contents\nAPIs\nAcknowledgement\nInstall\nAPI Usage\n.info()\n.provinces()\n.alt_spellings()\n.area()\n.borders()\n.calling_codes()\n.capital()\n.capital_latlng()\n.currencies()\n.demonym()\n.geo_json()\n.iso()\n.languages()\n.latlng()\n.native_name()\n.population()\n.region()\n.subregion()\n.timezones()\n.tld()\n.translations()\n.wiki()\n.google()\n.all()\nSpecial Thanks\nInspired By\nContributing\nHow to become a contributor\nHow to make a clean pull request\nDisclaimer\nLicense\nThe MIT License\n\n\n\n\n\nREADME.md\n\n\n\n\nCountry Info\nA python module for returning data about countries, ISO info and states/provinces within them.\nTable of Contents\n\nInstall\nAPI Usage\n\nAPIs\n\n.info()\n.provinces()\n.alt_spellings()\n.area()\n.borders()\n.calling_codes()\n.capital()\n.capital_latlng()\n.currencies()\n.demonym()\n.geojson()`\n.iso()\n.languages()\n.latlng()\n.native_name()\n.population()\n.region()\n.subregion()\n.timezones()\n.tld()\n.translations()\n.wiki()\n.google()\n.all()\n\nAcknowledgement\n\nSpecial Thanks\nContributing\nChangelog\nDisclaimer\nLicense (MIT)\n\nInstall\npip install countryinfo\nOR, git clone\ngit clone https://github.com/porimol/countryinfo.git\n\ncd countryinfo\npython setup.py install\nAPI Usage\nTo access one of the country properties available, you'll need to use one of the API methods listed below and pass a country in either way.\n.info()\nReturns all available information for a specified country.\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.info()\n# returns object,\n{\n    'ISO': {\n        'alpha2': 'SG',\n        'alpha3': 'SGP'\n    },\n    'altSpellings': [\n        'SG',\n        'Singapura',\n        'Republik Singapura',\n        '\u65b0\u52a0\u5761\u5171\u548c\u56fd'\n    ],\n    'area': 710,\n    'borders': [],\n    'callingCodes': ['65'],\n    'capital': 'Singapore',\n    'capital_latlng': [\n        1.357107,\n        103.819499\n    ],\n    'currencies': ['SGD'],\n    'demonym': 'Singaporean',\n    'flag': '',\n    'geoJSON': {},\n    'languages': [\n        'en',\n        'ms',\n        'ta',\n        'zh'\n    ],\n    'latlng': [\n        1.36666666,\n        103.8\n    ],\n    'name': 'Singapore',\n    'nativeName': 'Singapore',\n    'population': 5469700,\n    'provinces': ['Singapore'],\n    'region': 'Asia',\n    'subregion': 'South-Eastern Asia',\n    'timezones': ['UTC+08:00'],\n    'tld': ['.sg'],\n    'translations': {\n        'de': 'Singapur',\n        'es': 'Singapur',\n        'fr': 'Singapour',\n        'it': 'Singapore',\n        'ja': '\u30b7\u30f3\u30ac\u30dd\u30fc\u30eb'\n    },\n    'wiki': 'http://en.wikipedia.org/wiki/singapore',\n    'google': 'https://www.google.com/search?q=Singapore'\n}\n\n# Similar can also be achieved via country code or any\n# alternate name of a country. 
For example, Singapur\n# would be:\ncountry = CountryInfo('SG')\n.provinces()\nReturn provinces list\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.provinces()\n# returns object,\n['Singapore']\n.alt_spellings()\nReturns alternate spellings for the name of a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.alt_spellings()\n# returns list of strings, alternate names\n# ['SG', 'Singapura', 'Republik Singapura', '\u65b0\u52a0\u5761\u5171\u548c\u56fd']\n.area()\nReturns area (km\u00b2) for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.area()\n# returns number of square kilometer area\n710\n.borders()\nReturns bordering countries (ISO3) for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.borders()\n# returns array of strings, ISO3 codes of countries that border the given country\n[]\n.calling_codes()\nReturns international calling codes for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.calling_codes()\n# returns array of calling code strings\n['65']\n.capital()\nReturns capital city for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.capital()\n# returns string\n'Singapore'\n.capital_latlng()\nReturns capital city latitude and longitude for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.capital_latlng()\n# returns array, approx latitude and longitude for country capital\n[1.357107, 103.819499]\n.currencies()\nReturns official currencies for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.currencies()\n# returns array of strings, currencies\n# ['SGD']\n.demonym()\nReturns the demonyms for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.demonym()\n# returns string, name of residents\n'Singaporean'\n.geo_json()\nReturns geoJSON for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Bangladesh')\ncountry.geo_json()\n# returns object of GeoJSON data\n\n{\n    'features': [\n        {\n            'geometry': {\n                'coordinates': [[[92.672721, 22.041239],\n                                             [92.652257, 21.324048],\n                                             [92.303234, 21.475485],\n                                             [92.368554, 20.670883],\n                                             [92.082886, 21.192195],\n                                             [92.025215, 21.70157],\n                                             [91.834891, 22.182936],\n                                             [91.417087, 22.765019],\n                                             [90.496006, 22.805017],\n                                             [90.586957, 22.392794],\n                                             [90.272971, 21.836368],\n                                             [89.847467, 22.039146],\n                                             [89.70205, 21.857116],\n                                             [89.418863, 21.966179],\n                                             [89.031961, 
22.055708],\n                                             [88.876312, 22.879146],\n                                             [88.52977, 23.631142],\n                                             [88.69994, 24.233715],\n                                             [88.084422, 24.501657],\n                                             [88.306373, 24.866079],\n                                             [88.931554, 25.238692],\n                                             [88.209789, 25.768066],\n                                             [88.563049, 26.446526],\n                                             [89.355094, 26.014407],\n                                             [89.832481, 25.965082],\n                                             [89.920693, 25.26975],\n                                             [90.872211, 25.132601],\n                                             [91.799596, 25.147432],\n                                             [92.376202, 24.976693],\n                                             [91.915093, 24.130414],\n                                             [91.46773, 24.072639],\n                                             [91.158963, 23.503527],\n                                             [91.706475, 22.985264],\n                                             [91.869928, 23.624346],\n                                             [92.146035, 23.627499],\n                                             [92.672721, 22.041239]]],\n                            'type': 'Polygon'\n                },\n               'id': 'BGD',\n               'properties': {'name': 'Bangladesh'},\n               'type': 'Feature'}],\n    'type': 'FeatureCollection'\n}\n.iso()\nReturns ISO codes for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.iso()\n# returns object of ISO codes\n{'alpha2': 'SG', 'alpha3': 'SGP'}\n\ncountry.iso(2)\n# returns object of ISO codes\n'SG'\n\n\ncountry.iso(3)\n# returns object of ISO codes\n'SGP'\n.languages()\nReturns official languages for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.languages()\n# returns array of language codes\n['en', 'ms', 'ta', 'zh']\n.latlng()\nReturns approx latitude and longitude for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.latlng()\n# returns array, approx latitude and longitude for country\n[1.36666666, 103.8]\n.native_name()\nReturns the name of the country in its native tongue\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.native_name()\n# returns string, name of country in native language\n'Singapore'\n.population()\nReturns approximate population for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.population()\n# returns number, approx population\n5469700\n.region()\nReturns general region for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.region()\n# returns string\n'Asia'\n.subregion()\nReturns a more specific region for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.subregion()\n# returns string\n'South-Eastern Asia'\n.timezones()\nReturns all timezones for a specified country\n# coding=utf-8\nfrom countryinfo import 
CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.timezones()\n# returns array of timezones\n['UTC+08:00']\n.tld()\nReturns official top level domains for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.tld()\n# returns array of top level domains specific to the country\n['.sg']\n.translations()\nReturns translations for a specified country name in popular languages\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.translations()\n# returns object of translations of country name in major languages\n{\n    'de': 'Singapur',\n    'es': 'Singapur',\n    'fr': 'Singapour',\n    'it': 'Singapore',\n    'ja': '\u30b7\u30f3\u30ac\u30dd\u30fc\u30eb'\n}\n.wiki()\nReturns link to wikipedia page for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.wiki()\n# returns string URL of wikipedia article on country\n'http://en.wikipedia.org/wiki/singapore'\n.google()\nReturns link to google page for a specified country\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo('Singapore')\ncountry.google()\n# returns string URL of google page on country\n'https://www.google.com/search?q=Singapore'\n.all()\nReturns array of objects containing all available data for all countries. This will be super big. Not recommended.\n# coding=utf-8\nfrom countryinfo import CountryInfo\n\n\ncountry = CountryInfo()\ncountry.all()\n# returns array of objects,\n{\n    'zimbabwe': {'ISO': {'alpha2': 'ZW', 'alpha3': 'ZWE'},\n              'altSpellings': ['ZW', 'Republic of Zimbabwe'],\n              'area': 390757,\n              'borders': ['BWA', 'MOZ', 'ZAF', 'ZMB'],\n              'callingCodes': ['263'],\n              'capital': 'Harare',\n              'capital_latlng': [-17.831773, 31.045686],\n              'currencies': ['USD'],\n              'demonym': 'Zimbabwean',\n              'flag': '',\n              'geoJSON': {'features': [{'geometry': {'coordinates': [[[31.191409,\n                                                                       -22.25151],\n                                                                      [30.659865,\n                                                                       -22.151567],\n                                                                      [30.322883,\n                                                                       -22.271612],\n                                                                      [29.839037,\n                                                                       -22.102216],\n                                                                      [29.432188,\n                                                                       -22.091313],\n                                                                      [28.794656,\n                                                                       -21.639454],\n                                                                      [28.02137,\n                                                                       -21.485975],\n                                                                      [27.727228,\n                                                                       -20.851802],\n                                                                      [27.724747,\n                                                                       -20.499059],\n              
                                                        [27.296505,\n                                                                       -20.39152],\n                                                                      [26.164791,\n                                                                       -19.293086],\n                                                                      [25.850391,\n                                                                       -18.714413],\n                                                                      [25.649163,\n                                                                       -18.536026],\n                                                                      [25.264226,\n                                                                       -17.73654],\n                                                                      [26.381935,\n                                                                       -17.846042],\n                                                                      [26.706773,\n                                                                       -17.961229],\n                                                                      [27.044427,\n                                                                       -17.938026],\n                                                                      [27.598243,\n                                                                       -17.290831],\n                                                                      [28.467906,\n                                                                       -16.4684],\n                                                                      [28.825869,\n                                                                       -16.389749],\n                                                                      [28.947463,\n                                                                       -16.043051],\n                                                                      [29.516834,\n                                                                       -15.644678],\n                                                                      [30.274256,\n                                                                       -15.507787],\n                                                                      [30.338955,\n                                                                       -15.880839],\n                                                                      [31.173064,\n                                                                       -15.860944],\n                                                                      [31.636498,\n                                                                       -16.07199],\n                                                                      [31.852041,\n                                                                       -16.319417],\n                                                                      [32.328239,\n                                                                       -16.392074],\n                                                                      [32.847639,\n                                                                       -16.713398],\n                                                                      [32.849861,\n                                                                       -17.979057],\n                                              
                        [32.654886,\n                                                                       -18.67209],\n                                                                      [32.611994,\n                                                                       -19.419383],\n                                                                      [32.772708,\n                                                                       -19.715592],\n                                                                      [32.659743,\n                                                                       -20.30429],\n                                                                      [32.508693,\n                                                                       -20.395292],\n                                                                      [32.244988,\n                                                                       -21.116489],\n                                                                      [31.191409,\n                                                                       -22.25151]]],\n                                                     'type': 'Polygon'},\n                                        'id': 'ZWE',\n                                        'properties': {'name': 'Zimbabwe'},\n                                        'type': 'Feature'}],\n                          'type': 'FeatureCollection'},\n              'languages': ['en', 'sn', 'nd'],\n              'latlng': [-20, 30],\n              'name': 'Zimbabwe',\n              'nativeName': 'Zimbabwe',\n              'population': 13061239,\n              'provinces': ['Bulawayo',\n                            'Harare',\n                            'ManicalandMashonaland Central',\n                            'Mashonaland East',\n                            'Mashonaland'],\n              'region': 'Africa',\n              'subregion': 'Eastern Africa',\n              'timezones': ['UTC+02:00'],\n              'tld': ['.zw'],\n              'translations': {'de': 'Simbabwe',\n                               'es': 'Zimbabue',\n                               'fr': 'Zimbabwe',\n                               'it': 'Zimbabwe',\n                               'ja': '\u30b8\u30f3\u30d0\u30d6\u30a8'},\n              'wiki': 'http://en.wikipedia.org/wiki/zimbabwe',\n              'google': 'https://www.google.com/search?q=Zimbabwe'}\n}\nSpecial Thanks\nSpecial thanks to johan for his work on johan/world.geo.json, who made the geojson portion of this build possible.\nInspired By\nRepo: countryjs\nMaintainer: Oz Haven\nContributing\nSee the list of contributors who participated in this project.\nHow to become a contributor\nIf you want to contribute to countryinfo and make it better, your help is very welcome.\nYou can make constructive, helpful bug reports, feature requests and the noblest of all contributions.\nIf like to contribute in a good way, then follow the following guidelines.\nHow to make a clean pull request\n\nCreate a personal fork on Github.\nClone the fork on your local machine.(Your remote repo on Github is called origin.)\nAdd the original repository as a remote called upstream.\nIf you created your fork a while ago be sure to pull upstream changes into your local repository.\nCreate a new branch to work on! 
Branch from dev.\nImplement/fix your feature, comment your code.\nFollow countryinfo's code style, including indentation(4 spaces).\nWrite or adapt tests as needed.\nAdd or change the documentation as needed.\nPush your branch to your fork on Github, the remote origin.\nFrom your fork open a pull request to the dev branch.\nOnce the pull request is approved and merged, please pull the changes from upstream to your local repo and delete your extra branch(es).\n\nDisclaimer\nThis is being maintained in the contributor's free time, and as such, may contain minor errors in regards to some countries.\nMost of the information included in this library is what is listed on Wikipedia. If there is an error,\nplease let me know and I will do my best to correct it.\nLicense\nThe MIT License\nCopyright (c) 2018, Porimol Chandro porimolchandroroy@gmail.com\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n\n", "description": "Get country info like area, population, currencies etc."}, {"name": "compressed-rtf", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ncompressed_rtf\nDescription:\nUsage example:\nLicense:\n\n\n\n\n\nREADME.md\n\n\n\n\ncompressed_rtf\n\n\n\n\nCompressed Rich Text Format (RTF) compression worker in Python\nDescription:\nCompressed RTF also known as \"LZFu\" compression format\nBased on Rich Text Format (RTF) Compression Algorithm:\nhttps://msdn.microsoft.com/en-us/library/cc463890(v=exchg.80).aspx\nUsage example:\n>>> from compressed_rtf import compress, decompress\n>>>\n>>> data = '{\\\\rtf1\\\\ansi\\\\ansicpg1252\\\\pard test}'\n>>> comp = compress(data, compressed=True)  # compressed\n>>> comp\n'#\\x00\\x00\\x00\"\\x00\\x00\\x00LZFu3\\\\\\xe8t\\x03\\x00\\n\\x00rcpg125\\x922\\n\\xf3 t\\x07\\x90t}\\x0f\\x10'\n>>>\n>>> raw = compress(data, compressed=False)  # raw/uncompressed\n>>> raw\n'.\\x00\\x00\\x00\"\\x00\\x00\\x00MELA \\xdf\\x12\\xce{\\\\rtf1\\\\ansi\\\\ansicpg1252\\\\pard test}'\n>>>\n>>> decompress(comp)\n'{\\\\rtf1\\\\ansi\\\\ansicpg1252\\\\pard test}'\n>>>\n>>> decompress(raw)\n'{\\\\rtf1\\\\ansi\\\\ansicpg1252\\\\pard test}'\n>>>\nLicense:\nReleased under The MIT License.\n\n\n"}, {"name": "comm", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nComm\nRegister a comm implementation in the kernel:\nCase 1: Using the default CommManager and the BaseComm implementations\nCase 2: Providing your own comm manager creation implementation\nComm users\n\n\n\n\n\nREADME.md\n\n\n\n\nComm\nIt provides a way to register a Kernel Comm implementation, as per the Jupyter kernel protocol.\nIt also provides a base Comm implementation and a 
default CommManager that can be used.\nRegister a comm implementation in the kernel:\nCase 1: Using the default CommManager and the BaseComm implementations\nWe provide default implementations for usage in IPython:\nimport comm\n\n\nclass MyCustomComm(comm.base_comm.BaseComm):\n\n    def publish_msg(self, msg_type, data=None, metadata=None, buffers=None, **keys):\n        # TODO implement the logic for sending comm messages through the iopub channel\n        pass\n\n\ncomm.create_comm = MyCustomComm\nThis is typically what ipykernel and JupyterLite's pyolite kernel will do.\nCase 2: Providing your own comm manager creation implementation\nimport comm\n\ncomm.create_comm = custom_create_comm\ncomm.get_comm_manager = custom_comm_manager_getter\nThis is typically what xeus-python does (it has its own manager implementation using xeus's C++ messaging logic).\nComm users\nLibraries like ipywidgets can then use the comms implementation that has been registered by the kernel:\nfrom comm import create_comm, get_comm_manager\n\n# Create a comm\ncomm_manager = get_comm_manager()\ncomm = create_comm()\n\ncomm_manager.register_comm(comm)\n\n\n", "description": "Register comm implementations for Jupyter kernel communication"}, {"name": "cmudict", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCMUdict: Python wrapper for cmudict\nInstallation\nUsage\nCredits\n\n\n\n\n\nREADME.md\n\n\n\n\nCMUdict: Python wrapper for cmudict\n\n\nCMUdict is a versioned python wrapper package for\nThe CMU Pronouncing Dictionary data\nfiles. The main purpose is to expose the data with little or no assumption on\nhow it is to be used.\nInstallation\ncmudict is available on PyPI. Simply install it with pip:\npip install cmudict\nUsage\nThe cmudict data set includes 4 data files: cmudict.dict, cmudict.phones,\ncmudict.symbols, and cmudict.vp. See\nThe CMU Pronouncing Dictionary for\ndetails on the data. Chances are, if you're here, you already know what's in the\nfiles.\nEach file can be accessed through three functions, one which returns the raw\n(string) contents, one which returns a binary stream of the file, and one which\ndoes minimal processing of the file into an appropriate structure:\n>>> import cmudict\n\n>>> cmudict.dict() # Compatible with NLTK\n>>> cmudict.dict_string()\n>>> cmudict.dict_stream()\n\n>>> cmudict.phones()\n>>> cmudict.phones_string()\n>>> cmudict.phones_stream()\n\n>>> cmudict.symbols()\n>>> cmudict.symbols_string()\n>>> cmudict.symbols_stream()\n\n>>> cmudict.vp()\n>>> cmudict.vp_string()\n>>> cmudict.vp_stream()\nThree additional functions are included to maintain compatibility with NLTK:\ncmudict.entries(), cmudict.raw(), and cmudict.words(). 
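As a small illustration of how these NLTK-style helpers behave (a minimal sketch; the word 'hello' and the exact phone lists shown are only illustrative of the data layout):\n>>> import cmudict\n>>> 'hello' in cmudict.words()  # words() returns the plain word list, NLTK-style\nTrue\n>>> cmudict.dict()['hello']  # dict() maps each word to its list of pronunciations (phone lists)\n[['HH', 'AH0', 'L', 'OW1'], ['HH', 'EH0', 'L', 'OW1']]\n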
See the\nnltk.corpus.reader.cmudict\ndocumentation for details:\n>>> cmudict.entries() # Compatible with NLTK\n>>> cmudict.raw() # Compatible with NLTK\n>>> cmudict.words() # Compatible with NTLK\nAnd finally, the license for the cmudict data set is available as well:\n>>> cmudict.license_string() # Returns the cmudict license as a string\nCredits\nBuilt on or modeled after the following open source projects:\n\nThe CMU Pronouncing Dictionary\nNLTK\n\n\n\n", "description": "Access CMU Pronouncing Dictionary for pronunciation data"}, {"name": "cloudpickle", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ncloudpickle\nInstallation\nExamples\nOverriding pickle's serialization mechanism for importable constructs:\nRunning the tests\nHistory\n\n\n\n\n\nREADME.md\n\n\n\n\ncloudpickle\n\n\ncloudpickle makes it possible to serialize Python constructs not supported\nby the default pickle module from the Python standard library.\ncloudpickle is especially useful for cluster computing where Python\ncode is shipped over the network to execute on remote hosts, possibly close\nto the data.\nAmong other things, cloudpickle supports pickling for lambda functions\nalong with functions and classes defined interactively in the\n__main__ module (for instance in a script, a shell or a Jupyter notebook).\nCloudpickle can only be used to send objects between the exact same version\nof Python.\nUsing cloudpickle for long-term object storage is not supported and\nstrongly discouraged.\nSecurity notice: one should only load pickle data from trusted sources as\notherwise pickle.load can lead to arbitrary code execution resulting in a critical\nsecurity vulnerability.\nInstallation\nThe latest release of cloudpickle is available from\npypi:\npip install cloudpickle\n\nExamples\nPickling a lambda expression:\n>>> import cloudpickle\n>>> squared = lambda x: x ** 2\n>>> pickled_lambda = cloudpickle.dumps(squared)\n\n>>> import pickle\n>>> new_squared = pickle.loads(pickled_lambda)\n>>> new_squared(2)\n4\nPickling a function interactively defined in a Python shell session\n(in the __main__ module):\n>>> CONSTANT = 42\n>>> def my_function(data: int) -> int:\n...     return data + CONSTANT\n...\n>>> pickled_function = cloudpickle.dumps(my_function)\n>>> depickled_function = pickle.loads(pickled_function)\n>>> depickled_function\n<function __main__.my_function(data:int) -> int>\n>>> depickled_function(43)\n85\nOverriding pickle's serialization mechanism for importable constructs:\nAn important difference between cloudpickle and pickle is that\ncloudpickle can serialize a function or class by value, whereas pickle\ncan only serialize it by reference. Serialization by reference treats\nfunctions and classes as attributes of modules, and pickles them through\ninstructions that trigger the import of their module at load time.\nSerialization by reference is thus limited in that it assumes that the module\ncontaining the function or class is available/importable in the unpickling\nenvironment. 
This assumption breaks when pickling constructs defined in an\ninteractive session, a case that is automatically detected by cloudpickle,\nthat pickles such constructs by value.\nAnother case where the importability assumption is expected to break is when\ndeveloping a module in a distributed execution environment: the worker\nprocesses may not have access to the said module, for example if they live on a\ndifferent machine than the process in which the module is being developed.\nBy itself, cloudpickle cannot detect such \"locally importable\" modules and\nswitch to serialization by value; instead, it relies on its default mode,\nwhich is serialization by reference. However, since cloudpickle 2.0.0, one\ncan explicitly specify modules for which serialization by value should be used,\nusing the register_pickle_by_value(module)//unregister_pickle(module) API:\n>>> import cloudpickle\n>>> import my_module\n>>> cloudpickle.register_pickle_by_value(my_module)\n>>> cloudpickle.dumps(my_module.my_function)  # my_function is pickled by value\n>>> cloudpickle.unregister_pickle_by_value(my_module)\n>>> cloudpickle.dumps(my_module.my_function)  # my_function is pickled by reference\nUsing this API, there is no need to re-install the new version of the module on\nall the worker nodes nor to restart the workers: restarting the client Python\nprocess with the new source code is enough.\nNote that this feature is still experimental, and may fail in the following\nsituations:\n\n\nIf the body of a function/class pickled by value contains an import statement:\n>>> def f():\n>>> ... from another_module import g\n>>> ... # calling f in the unpickling environment may fail if another_module\n>>> ... # is unavailable\n>>> ... return g() + 1\n\n\nIf a function pickled by reference uses a function pickled by value during its execution.\n\n\nRunning the tests\n\n\nWith tox, to test run the tests for all the supported versions of\nPython and PyPy:\npip install tox\ntox\n\nor alternatively for a specific environment:\ntox -e py37\n\n\n\nWith py.test to only run the tests for your current version of\nPython:\npip install -r dev-requirements.txt\nPYTHONPATH='.:tests' py.test\n\n\n\nHistory\ncloudpickle was initially developed by picloud.com and shipped as part of\nthe client SDK.\nA copy of cloudpickle.py was included as part of PySpark, the Python\ninterface to Apache Spark. 
Davies Liu, Josh\nRosen, Thom Neale and other Apache Spark developers improved it significantly,\nmost notably to add support for PyPy and Python 3.\nThe aim of the cloudpickle project is to make that work available to a wider\naudience outside of the Spark ecosystem and to make it easier to improve it\nfurther notably with the help of a dedicated non-regression test suite.\n\n\n", "description": "Extended pickling support for Python objects."}, {"name": "cligj", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncligj\nArguments\nOptions\nJSON formatting options\nCoordinate precision option\nGeographic (default), projected, or Mercator switch\nFeature collection or feature sequence switch\nGeoJSON output mode option\nExample\n\n\n\n\n\nREADME.rst\n\n\n\n\ncligj\n\n\nCommon arguments and options for GeoJSON processing commands, using Click.\ncligj is for Python developers who create command line interfaces for geospatial data.\ncligj allows you to quickly build consistent, well-tested and interoperable CLIs for handling GeoJSON.\n\nArguments\nfiles_in_arg\nMultiple files\nfiles_inout_arg\nMultiple files, last of which is an output file.\nfeatures_in_arg\nGeoJSON Features input which accepts multiple representations of GeoJSON features\nand returns the input data as an iterable of GeoJSON Feature-like dictionaries\n\nOptions\nverbose_opt\nquiet_opt\nformat_opt\n\nJSON formatting options\nindent_opt\ncompact_opt\n\nCoordinate precision option\nprecision_opt\n\nGeographic (default), projected, or Mercator switch\nprojection_geographic_opt\nprojection_projected_opt\nprojection_mercator_opt\n\nFeature collection or feature sequence switch\nsequence_opt\nuse_rs_opt\n\nGeoJSON output mode option\ngeojson_type_collection_opt\ngeojson_type_feature_opt\ndef geojson_type_bbox_opt\n\nExample\nHere's an example of a command that writes out GeoJSON features as a collection\nor, optionally, a sequence of individual features. 
Since most software that\nreads and writes GeoJSON expects a text containing a single feature collection,\nthat's the default, and a LF-delimited sequence of texts containing one GeoJSON\nfeature each is a feature that is turned on using the --sequence option.\nTo write sequences of feature texts that conform to the GeoJSON Text Sequences\nstandard (and might contain\npretty-printed JSON) with the ASCII Record Separator (0x1e) as a delimiter, use\nthe --rs option\n\nWarning\nFuture change warning\nGeoJSON sequences (--sequence), not collections (--no-sequence), will be\nthe default in version 1.0.0.\n\nimport click\nimport cligj\nimport json\n\ndef process_features(features):\n    for feature in features:\n        # TODO process feature here\n        yield feature\n\n@click.command()\n@cligj.features_in_arg\n@cligj.sequence_opt\n@cligj.use_rs_opt\ndef pass_features(features, sequence, use_rs):\n    if sequence:\n        for feature in process_features(features):\n            if use_rs:\n                click.echo(u'\\x1e', nl=False)\n            click.echo(json.dumps(feature))\n    else:\n        click.echo(json.dumps(\n            {'type': 'FeatureCollection',\n             'features': list(process_features(features))}))\nOn the command line, the generated help text explains the usage\nUsage: pass_features [OPTIONS] FEATURES...\n\nOptions:\n--sequence / --no-sequence  Write a LF-delimited sequence of texts\n                            containing individual objects or write a single\n                            JSON text containing a feature collection object\n                            (the default).\n--rs / --no-rs              Use RS (0x1E) as a prefix for individual texts\n                            in a sequence as per http://tools.ietf.org/html\n                            /draft-ietf-json-text-sequence-13 (default is\n                            False).\n--help                      Show this message and exit.\nAnd can be used like this\n$ cat data.geojson\n{'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'id': '1'}, {'type': 'Feature', 'id': '2'}]}\n\n$ pass_features data.geojson\n{'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'id': '1'}, {'type': 'Feature', 'id': '2'}]}\n\n$ cat data.geojson | pass_features\n{'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'id': '1'}, {'type': 'Feature', 'id': '2'}]}\n\n$ cat data.geojson | pass_features --sequence\n{'type': 'Feature', 'id': '1'}\n{'type': 'Feature', 'id': '2'}\n\n$ cat data.geojson | pass_features --sequence --rs\n^^{'type': 'Feature', 'id': '1'}\n^^{'type': 'Feature', 'id': '2'}\nIn this example, ^^ represents 0x1e.\n\n\n", "description": "Click extension for handling GeoJSON data on the command line"}, {"name": "click", "readme": "\nClick is a Python package for creating beautiful command line interfaces\nin a composable way with as little code as necessary. It\u2019s the \u201cCommand\nLine Interface Creation Kit\u201d. 
It\u2019s highly configurable but comes with\nsensible defaults out of the box.\nIt aims to make the process of writing command line tools quick and fun\nwhile also preventing any frustration caused by the inability to\nimplement an intended CLI API.\nClick in three points:\n\nArbitrary nesting of commands\nAutomatic help page generation\nSupports lazy loading of subcommands at runtime\n\n\nInstalling\nInstall and update using pip:\n$ pip install -U click\n\n\nA Simple Example\nimport click\n\n@click.command()\n@click.option(\"--count\", default=1, help=\"Number of greetings.\")\n@click.option(\"--name\", prompt=\"Your name\", help=\"The person to greet.\")\ndef hello(count, name):\n    \"\"\"Simple program that greets NAME for a total of COUNT times.\"\"\"\n    for _ in range(count):\n        click.echo(f\"Hello, {name}!\")\n\nif __name__ == '__main__':\n    hello()\n$ python hello.py --count=3\nYour name: Click\nHello, Click!\nHello, Click!\nHello, Click!\n\n\nDonate\nThe Pallets organization develops and supports Click and other popular\npackages. In order to grow the community of contributors and users, and\nallow the maintainers to devote more time to the projects, please\ndonate today.\n\n\nLinks\n\nDocumentation: https://click.palletsprojects.com/\nChanges: https://click.palletsprojects.com/changes/\nPyPI Releases: https://pypi.org/project/click/\nSource Code: https://github.com/pallets/click\nIssue Tracker: https://github.com/pallets/click/issues\nChat: https://discord.gg/pallets\n\n\n", "description": "Create beautiful command line interfaces in Python.", "category": "CLI"}, {"name": "click-plugins", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nclick-plugins\nWhy?\nEnabling Plugins\nDeveloping Plugins\nBroken and Incompatible Plugins\nBest Practices and Extra Credit\nInstallation\nDeveloping\nChangelog\nAuthors\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nclick-plugins\n\n\nAn extension module for click to register\nexternal CLI commands via setuptools entry-points.\n\nWhy?\nLets say you develop a commandline interface and someone requests a new feature\nthat is absolutely related to your project but would have negative consequences\nlike additional dependencies, major refactoring, or maybe its just too domain\nspecific to be supported directly.  Rather than developing a separate standalone\nutility you could offer up a setuptools entry point\nthat allows others to use your commandline utility as a home for their related\nsub-commands.  You get to choose where these sub-commands or sub-groups CAN be\nregistered but the plugin developer gets to choose they ARE registered.  You\ncould have all plugins register alongside the core commands, in a special\nsub-group, across multiple sub-groups, or some combination.\n\nEnabling Plugins\nFor a more detailed example see the examples section.\nThe only requirement is decorating click.group() with click_plugins.with_plugins()\nwhich handles attaching external commands and groups.  
In this case the core CLI developer\nregisters CLI plugins from core_package.cli_plugins.\nfrom pkg_resources import iter_entry_points\n\nimport click\nfrom click_plugins import with_plugins\n\n\n@with_plugins(iter_entry_points('core_package.cli_plugins'))\n@click.group()\ndef cli():\n    \"\"\"Commandline interface for yourpackage.\"\"\"\n\n@cli.command()\ndef subcommand():\n    \"\"\"Subcommand that does something.\"\"\"\n\nDeveloping Plugins\nPlugin developers need to register their sub-commands or sub-groups to an\nentry-point in their setup.py that is loaded by the core package.\nfrom setuptools import setup\n\nsetup(\n    name='yourscript',\n    version='0.1',\n    py_modules=['yourscript'],\n    install_requires=[\n        'click',\n    ],\n    entry_points='''\n        [core_package.cli_plugins]\n        cool_subcommand=yourscript.cli:cool_subcommand\n        another_subcommand=yourscript.cli:another_subcommand\n    ''',\n)\n\nBroken and Incompatible Plugins\nAny sub-command or sub-group that cannot be loaded is caught and converted to\na click_plugins.core.BrokenCommand() rather than just crashing the entire\nCLI.  The short-help is converted to a warning message like:\nWarning: could not load plugin. See ``<CLI> <command/group> --help``.\nand if the sub-command or group is executed the entire traceback is printed.\n\nBest Practices and Extra Credit\nOpening a CLI to plugins encourages other developers to independently extend\nfunctionality independently but there is no guarantee these new features will\nbe \"on brand\".  Plugin developers are almost certainly already using features\nin the core package the CLI belongs to so defining commonly used arguments and\noptions in one place lets plugin developers reuse these flags to produce a more\ncohesive CLI.  If the CLI is simple maybe just define them at the top of\nyourpackage/cli.py or for more complex packages something like\nyourpackage/cli/options.py.  These common options need to be easy to find\nand be well documented so that plugin developers know what variable to give to\ntheir sub-command's function and what object they can expect to receive.  
Don't\nforget to document non-obvious callbacks.\nKeep in mind that plugin developers also have access to the parent group's\nctx.obj, which is very useful for passing things like verbosity levels or\nconfig values around to sub-commands.\nHere's some code that sub-commands could re-use:\nfrom multiprocessing import cpu_count\n\nimport click\n\njobs_opt = click.option(\n    '-j', '--jobs', metavar='CORES', type=click.IntRange(min=1, max=cpu_count()), default=1,\n    show_default=True, help=\"Process data across N cores.\"\n)\nPlugin developers can access this with:\nimport click\nimport parent_cli_package.cli.options\n\n\n@click.command()\n@parent_cli_package.cli.options.jobs_opt\ndef subcommand(jobs):\n    \"\"\"I do something domain specific.\"\"\"\n\nInstallation\nWith pip:\n$ pip install click-plugins\nFrom source:\n$ git clone https://github.com/click-contrib/click-plugins.git\n$ cd click-plugins\n$ python setup.py install\n\nDeveloping\n$ git clone https://github.com/click-contrib/click-plugins.git\n$ cd click-plugins\n$ pip install -e .\\[dev\\]\n$ pytest tests --cov click_plugins --cov-report term-missing\n\nChangelog\nSee CHANGES.txt\n\nAuthors\nSee AUTHORS.txt\n\nLicense\nSee LICENSE.txt\n\n\n"}, {"name": "charset-normalizer", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCharset Detection, for Everyone \ud83d\udc4b\n\u26a1 Performance\n\u2728 Installation\n\ud83d\ude80 Basic Usage\nCLI\nPython\n\ud83d\ude07 Why\n\ud83c\udf70 How\n\u26a1 Known limitations\n\u26a0\ufe0f About Python EOLs\n\ud83d\udc64 Contributing\n\ud83d\udcdd License\n\ud83d\udcbc For Enterprise\n\n\n\n\n\nREADME.md\n\n\n\n\nCharset Detection, for Everyone \ud83d\udc4b\n\nThe Real First Universal Charset Detector\n\n\n\n\n\n\n\n\n\n\n\nFeatured Packages\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA library that helps you read text from an unknown charset encoding. Motivated by chardet,\nI'm trying to resolve the issue by taking a new approach.\nAll IANA character set names for which the Python core library provides codecs are supported.\n\n\n  >>>>> \ud83d\udc49 Try Me Online Now, Then Adopt Me \ud83d\udc48  <<<<<\n\nThis project offers you an alternative to Universal Charset Encoding Detector, also known as Chardet.\n\n\n\nFeature\nChardet\nCharset Normalizer\ncChardet\n\n\n\n\nFast\n\u274c\n\u2705\n\u2705\n\n\nUniversal**\n\u274c\n\u2705\n\u274c\n\n\nReliable without distinguishable standards\n\u274c\n\u2705\n\u2705\n\n\nReliable with distinguishable standards\n\u2705\n\u2705\n\u2705\n\n\nLicense\nLGPL-2.1restrictive\nMIT\nMPL-1.1restrictive\n\n\nNative Python\n\u2705\n\u2705\n\u274c\n\n\nDetect spoken language\n\u274c\n\u2705\nN/A\n\n\nUnicodeDecodeError Safety\n\u274c\n\u2705\n\u274c\n\n\nWhl Size (min)\n193.6 kB\n42 kB\n~200 kB\n\n\nSupported Encoding\n33\n\ud83c\udf89 99\n40\n\n\n\n\n\n** : They are clearly using specific code for a specific encoding even if covering most of used one\nDid you got there because of the logs? See https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html\n\u26a1 Performance\nThis package offer better performance than its counterpart Chardet. Here are some numbers.\n\n\n\nPackage\nAccuracy\nMean per file (ms)\nFile per sec (est)\n\n\n\n\nchardet\n86 %\n200 ms\n5 file/sec\n\n\ncharset-normalizer\n98 %\n10 ms\n100 file/sec\n\n\n\n\n\n\nPackage\n99th percentile\n95th percentile\n50th percentile\n\n\n\n\nchardet\n1200 ms\n287 ms\n23 ms\n\n\ncharset-normalizer\n100 ms\n50 ms\n5 ms\n\n\n\nChardet's performance on larger file (1MB+) are very poor. 
Expect huge difference on large payload.\n\nStats are generated using 400+ files using default parameters. More details on used files, see GHA workflows.\nAnd yes, these results might change at any time. The dataset can be updated to include more files.\nThe actual delays heavily depends on your CPU capabilities. The factors should remain the same.\nKeep in mind that the stats are generous and that Chardet accuracy vs our is measured using Chardet initial capability\n(eg. Supported Encoding) Challenge-them if you want.\n\n\u2728 Installation\nUsing pip:\npip install charset-normalizer -U\n\ud83d\ude80 Basic Usage\nCLI\nThis package comes with a CLI.\nusage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]\n                  file [file ...]\n\nThe Real First Universal Charset Detector. Discover originating encoding used\non text file. Normalize text to unicode.\n\npositional arguments:\n  files                 File(s) to be analysed\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -v, --verbose         Display complementary information about file if any.\n                        Stdout will contain logs about the detection process.\n  -a, --with-alternative\n                        Output complementary possibilities if any. Top-level\n                        JSON WILL be a list.\n  -n, --normalize       Permit to normalize input file. If not set, program\n                        does not write anything.\n  -m, --minimal         Only output the charset detected to STDOUT. Disabling\n                        JSON output.\n  -r, --replace         Replace file when trying to normalize it instead of\n                        creating a new one.\n  -f, --force           Replace file without asking if you are sure, use this\n                        flag with caution.\n  -t THRESHOLD, --threshold THRESHOLD\n                        Define a custom maximum amount of chaos allowed in\n                        decoded content. 0. <= chaos <= 1.\n  --version             Show version information and exit.\n\nnormalizer ./data/sample.1.fr.srt\nor\npython -m charset_normalizer ./data/sample.1.fr.srt\n\ud83c\udf89 Since version 1.4.0 the CLI produce easily usable stdout result in JSON format.\n{\n    \"path\": \"/home/default/projects/charset_normalizer/data/sample.1.fr.srt\",\n    \"encoding\": \"cp1252\",\n    \"encoding_aliases\": [\n        \"1252\",\n        \"windows_1252\"\n    ],\n    \"alternative_encodings\": [\n        \"cp1254\",\n        \"cp1256\",\n        \"cp1258\",\n        \"iso8859_14\",\n        \"iso8859_15\",\n        \"iso8859_16\",\n        \"iso8859_3\",\n        \"iso8859_9\",\n        \"latin_1\",\n        \"mbcs\"\n    ],\n    \"language\": \"French\",\n    \"alphabets\": [\n        \"Basic Latin\",\n        \"Latin-1 Supplement\"\n    ],\n    \"has_sig_or_bom\": false,\n    \"chaos\": 0.149,\n    \"coherence\": 97.152,\n    \"unicode_path\": null,\n    \"is_preferred\": true\n}\nPython\nJust print out normalized text\nfrom charset_normalizer import from_path\n\nresults = from_path('./my_subtitle.srt')\n\nprint(str(results.best()))\nUpgrade your code without effort\nfrom charset_normalizer import detect\nThe above code will behave the same as chardet. 
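As a minimal sketch of that drop-in usage (the sample text is arbitrary and the detected values will depend on the input), the compatibility function returns the same kind of dictionary chardet users already handle:\nfrom charset_normalizer import detect\n\npayload = 'Bonjour, le caf\u00e9 est pr\u00eat.'.encode('cp1252')  # any raw bytes you would normally pass to chardet\nresult = detect(payload)\nprint(result)  # a dict with 'encoding', 'language' and 'confidence' keys\n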
We ensure that we offer the best (reasonable) BC result possible.\nSee the docs for advanced usage : readthedocs.io\n\ud83d\ude07 Why\nWhen I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a\nreliable alternative using a completely different method. Also! I never back down on a good challenge!\nI don't care about the originating charset encoding, because two different tables can\nproduce two identical rendered string.\nWhat I want is to get readable text, the best I can.\nIn a way, I'm brute forcing text decoding. How cool is that ? \ud83d\ude0e\nDon't confuse package ftfy with charset-normalizer or chardet. ftfy goal is to repair unicode string whereas charset-normalizer to convert raw file in unknown encoding to unicode.\n\ud83c\udf70 How\n\nDiscard all charset encoding table that could not fit the binary content.\nMeasure noise, or the mess once opened (by chunks) with a corresponding charset encoding.\nExtract matches with the lowest mess detected.\nAdditionally, we measure coherence / probe for a language.\n\nWait a minute, what is noise/mess and coherence according to YOU ?\nNoise : I opened hundred of text files, written by humans, with the wrong encoding table. I observed, then\nI established some ground rules about what is obvious when it seems like a mess.\nI know that my interpretation of what is noise is probably incomplete, feel free to contribute in order to\nimprove or rewrite it.\nCoherence : For each language there is on earth, we have computed ranked letter appearance occurrences (the best we can). So I thought\nthat intel is worth something here. So I use those records against decoded text to check if I can detect intelligent design.\n\u26a1 Known limitations\n\nLanguage detection is unreliable when text contains two or more languages sharing identical letters. (eg. HTML (english tags) + Turkish content (Sharing Latin characters))\nEvery charset detector heavily depends on sufficient content. In common cases, do not bother run detection on very tiny content.\n\n\u26a0\ufe0f About Python EOLs\nIf you are running:\n\nPython >=2.7,<3.5: Unsupported\nPython 3.5: charset-normalizer < 2.1\nPython 3.6: charset-normalizer < 3.1\nPython 3.7: charset-normalizer < 4.0\n\nUpgrade your Python interpreter as soon as possible.\n\ud83d\udc64 Contributing\nContributions, issues and feature requests are very much welcome.\nFeel free to check issues page if you want to contribute.\n\ud83d\udcdd License\nCopyright \u00a9 Ahmed TAHRI @Ousret.\nThis project is MIT licensed.\nCharacters frequencies used in this project \u00a9 2012 Denny Vrande\u010di\u0107\n\ud83d\udcbc For Enterprise\nProfessional support for charset-normalizer is available as part of the Tidelift\nSubscription. 
Tidelift gives software development teams a single source for\npurchasing and maintaining their software, with professional grade assurances\nfrom the experts who know it best, while seamlessly integrating with existing\ntools.\n\n\n"}, {"name": "chardet", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nChardet: The Universal Character Encoding Detector\nInstallation\nDocumentation\nCommand-line Tool\nAbout\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nChardet: The Universal Character Encoding Detector\n\n\n\n\n\n\n\n\nDetects\n\nASCII, UTF-8, UTF-16 (2 variants), UTF-32 (4 variants)\nBig5, GB2312, EUC-TW, HZ-GB-2312, ISO-2022-CN (Traditional and Simplified Chinese)\nEUC-JP, SHIFT_JIS, CP932, ISO-2022-JP (Japanese)\nEUC-KR, ISO-2022-KR, Johab (Korean)\nKOI8-R, MacCyrillic, IBM855, IBM866, ISO-8859-5, windows-1251 (Cyrillic)\nISO-8859-5, windows-1251 (Bulgarian)\nISO-8859-1, windows-1252, MacRoman (Western European languages)\nISO-8859-7, windows-1253 (Greek)\nISO-8859-8, windows-1255 (Visual and Logical Hebrew)\nTIS-620 (Thai)\n\n\n\n\nNote\nOur ISO-8859-2 and windows-1250 (Hungarian) probers have been temporarily\ndisabled until we can retrain the models.\n\nRequires Python 3.7+.\n\nInstallation\nInstall from PyPI:\npip install chardet\n\n\nDocumentation\nFor users, docs are now available at https://chardet.readthedocs.io/.\n\nCommand-line Tool\nchardet comes with a command-line script which reports on the encodings of one\nor more files:\n% chardetect somefile someotherfile\nsomefile: windows-1252 with confidence 0.5\nsomeotherfile: ascii with confidence 1.0\n\n\nAbout\nThis is a continuation of Mark Pilgrim's excellent original chardet port from C, and Ian Cordasco's\ncharade Python 3-compatible fork.\n\n\nmaintainer:Dan Blanchard\n\n\n\n\n\n", "description": "Universal character encoding detector."}, {"name": "cffi", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "C Foreign Function Interface for Python."}, {"name": "catalogue", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncatalogue: Super lightweight function registries for your library\n\u23f3 Installation\n\ud83d\udc69\u200d\ud83d\udcbb Usage\n\u2753 FAQ\nBut can't the user just pass in the custom_loader function directly?\nHow do I make sure all of the registration decorators have run?\n\ud83c\udf9b API\nfunctioncatalogue.create\nclassRegistry\nmethodRegistry.__init__\nmethodRegistry.__contains__\nmethodRegistry.__call__\nmethodRegistry.register\nmethodRegistry.get\nmethodRegistry.get_all\nmethodRegistry.get_entry_points\nmethodRegistry.get_entry_point\nmethodRegistry.find\nfunctioncatalogue.check_exists\n\n\n\n\n\nREADME.md\n\n\n\n\n\ncatalogue: Super lightweight function registries for your library\ncatalogue is a tiny, zero-dependencies library that makes it easy to add\nfunction (or object) registries to your code. Function registries are helpful\nwhen you have objects that need to be both easily serializable and fully\ncustomizable. Instead of passing a function into your object, you pass in an\nidentifier name, which the object can use to lookup the function from the\nregistry. This makes the object easy to serialize, because the name is a simple\nstring. If you instead saved the function, you'd have to use Pickle for\nserialization, which has many drawbacks.\n\n\n\n\n\n\u23f3 Installation\npip install catalogue\nconda install -c conda-forge catalogue\n\n\u26a0\ufe0f Important note: catalogue v3.0+ is only compatible with Python 3.8+.\nFor Python 3.6+ compatibility, use catalogue v2.x and for Python 2.7+\ncompatibility, use catalogue v1.x.\n\n\ud83d\udc69\u200d\ud83d\udcbb Usage\nLet's imagine you're developing a Python package that needs to load data\nsomewhere. You've already implemented some loader functions for the most common\ndata types, but you want to allow the user to easily add their own. Using\ncatalogue.create you can create a new registry under the namespace\nyour_package \u2192 loaders.\n# YOUR PACKAGE\nimport catalogue\n\nloaders = catalogue.create(\"your_package\", \"loaders\")\nThis gives you a loaders.register decorator that your users can import and\ndecorate their custom loader functions with.\n# USER CODE\nfrom your_package import loaders\n\n@loaders.register(\"custom_loader\")\ndef custom_loader(data):\n    # Load something here...\n    return data\nThe decorated function will be registered automatically and in your package,\nyou'll be able to access all loaders by calling loaders.get_all.\n# YOUR PACKAGE\ndef load_data(data, loader_id):\n    print(\"All loaders:\", loaders.get_all()) # {\"custom_loader\": <custom_loader>}\n    loader = loaders.get(loader_id)\n    return loader(data)\nThe user can now refer to their custom loader using only its string name\n(\"custom_loader\") and your application will know what to do and will use their\ncustom function.\n# USER CODE\nfrom your_package import load_data\n\nload_data(data, loader_id=\"custom_loader\")\n\u2753 FAQ\nBut can't the user just pass in the custom_loader function directly?\nSure, that's the more classic callback approach. 
Instead of a string ID,\nload_data could also take a function, in which case you wouldn't need a\npackage like this. catalogue helps you when you need to produce a serializable\nrecord of which functions were passed in. For instance, you might want to write\na log message, or save a config to load back your object later. With\ncatalogue, your functions can be parameterized by strings, so logging and\nserialization remains easy \u2013 while still giving you full extensibility.\nHow do I make sure all of the registration decorators have run?\nDecorators normally run when modules are imported. Relying on this side-effect\ncan sometimes lead to confusion, especially if there's no other reason the\nmodule would be imported. One solution is to use\nentry points.\nFor instance, in spaCy we're starting to use function\nregistries to make the pipeline components much more customizable. Let's say one\nuser, Jo, develops a better tagging model using new machine learning research.\nEnd-users of Jo's package should be able to write\nspacy.load(\"jo_tagging_model\"). They shouldn't need to remember to write\nimport jos_tagged_model first, just to run the function registries as a\nside-effect. With entry points, the registration happens at install time \u2013 so\nyou don't need to rely on the import side-effects.\n\ud83c\udf9b API\nfunction catalogue.create\nCreate a new registry for a given namespace. Returns a setter function that can\nbe used as a decorator or called with a name and func keyword argument. If\nentry_points=True is set, the registry will check for\nPython entry points\nadvertised for the given namespace, e.g. the entry point group\nspacy_architectures for the namespace \"spacy\", \"architectures\", in\nRegistry.get and Registry.get_all. This allows other packages to\nauto-register functions.\n\n\n\nArgument\nType\nDescription\n\n\n\n\n*namespace\nstr\nThe namespace, e.g. \"spacy\" or \"spacy\", \"architectures\".\n\n\nentry_points\nbool\nWhether to check for entry points of the given namespace and pre-populate the global registry.\n\n\nRETURNS\nRegistry\nThe Registry object with methods to register and retrieve functions.\n\n\n\narchitectures = catalogue.create(\"spacy\", \"architectures\")\n\n# Use as decorator\n@architectures.register(\"custom_architecture\")\ndef custom_architecture():\n    pass\n\n# Use as regular function\narchitectures.register(\"custom_architecture\", func=custom_architecture)\nclass Registry\nThe registry object that can be used to register and retrieve functions. It's\nusually created internally when you call catalogue.create.\nmethod Registry.__init__\nInitialize a new registry. If entry_points=True is set, the registry will\ncheck for\nPython entry points\nadvertised for the given namespace, e.g. the entry point group\nspacy_architectures for the namespace \"spacy\", \"architectures\", in\nRegistry.get and Registry.get_all.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nnamespace\nTuple[str]\nThe namespace, e.g. 
\"spacy\" or \"spacy\", \"architectures\".\n\n\nentry_points\nbool\nWhether to check for entry points of the given namespace in get and get_all.\n\n\nRETURNS\nRegistry\nThe newly created object.\n\n\n\n# User-facing API\narchitectures = catalogue.create(\"spacy\", \"architectures\")\n# Internal API\narchitectures = Registry((\"spacy\", \"architectures\"))\nmethod Registry.__contains__\nCheck whether a name is in the registry.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nname\nstr\nThe name to check.\n\n\nRETURNS\nbool\nWhether the name is in the registry.\n\n\n\narchitectures = catalogue.create(\"spacy\", \"architectures\")\n\n@architectures.register(\"custom_architecture\")\ndef custom_architecture():\n    pass\n\nassert \"custom_architecture\" in architectures\nmethod Registry.__call__\nRegister a function in the registry's namespace. Can be used as a decorator or\ncalled as a function with the func keyword argument supplying the function to\nregister. Delegates to Registry.register.\nmethod Registry.register\nRegister a function in the registry's namespace. Can be used as a decorator or\ncalled as a function with the func keyword argument supplying the function to\nregister.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nname\nstr\nThe name to register under the namespace.\n\n\nfunc\nAny\nOptional function to register (if not used as decorator).\n\n\nRETURNS\nCallable\nThe decorator that takes one argument, the name.\n\n\n\narchitectures = catalogue.create(\"spacy\", \"architectures\")\n\n# Use as decorator\n@architectures.register(\"custom_architecture\")\ndef custom_architecture():\n    pass\n\n# Use as regular function\narchitectures.register(\"custom_architecture\", func=custom_architecture)\nmethod Registry.get\nGet a function registered in the namespace.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nname\nstr\nThe name.\n\n\nRETURNS\nAny\nThe registered function.\n\n\n\ncustom_architecture = architectures.get(\"custom_architecture\")\nmethod Registry.get_all\nGet all functions in the registry's namespace.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nRETURNS\nDict[str, Any]\nThe registered functions, keyed by name.\n\n\n\nall_architectures = architectures.get_all()\n# {\"custom_architecture\": <custom_architecture>}\nmethod Registry.get_entry_points\nGet registered entry points from other packages for this namespace. The name of\nthe entry point group is the namespace joined by _.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nRETURNS\nDict[str, Any]\nThe loaded entry points, keyed by name.\n\n\n\narchitectures = catalogue.create(\"spacy\", \"architectures\", entry_points=True)\n# Will get all entry points of the group \"spacy_architectures\"\nall_entry_points = architectures.get_entry_points()\nmethod Registry.get_entry_point\nCheck if registered entry point is available for a given name in the namespace\nand load it. Otherwise, return the default value.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nname\nstr\nName of entry point to load.\n\n\ndefault\nAny\nThe default value to return. 
Defaults to None.\n\n\nRETURNS\nAny\nThe loaded entry point or the default value.\n\n\n\narchitectures = catalogue.create(\"spacy\", \"architectures\", entry_points=True)\n# Will get entry point \"custom_architecture\" of the group \"spacy_architectures\"\ncustom_architecture = architectures.get_entry_point(\"custom_architecture\")\nmethod Registry.find\nFind the information about a registered function, including the module and path\nto the file it's defined in, the line number and the docstring, if available.\n\n\n\nArgument\nType\nDescription\n\n\n\n\nname\nstr\nName of the registered function.\n\n\nRETURNS\nDict[str, Union[str, int]]\nThe information about the function.\n\n\n\nimport catalogue\n\narchitectures = catalogue.create(\"spacy\", \"architectures\", entry_points=True)\n\n@architectures(\"my_architecture\")\ndef my_architecture():\n    \"\"\"This is an architecture\"\"\"\n    pass\n\ninfo = architectures.find(\"my_architecture\")\n# {'module': 'your_package.architectures',\n#  'file': '/path/to/your_package/architectures.py',\n#  'line_no': 5,\n#  'docstring': 'This is an architecture'}\nfunction catalogue.check_exists\nCheck if a namespace exists.\n\n\n\nArgument\nType\nDescription\n\n\n\n\n*namespace\nstr\nThe namespace, e.g. \"spacy\" or \"spacy\", \"architectures\".\n\n\nRETURNS\nbool\nWhether the namespace exists.\n\n\n\n\n\n", "description": "Tiny Python library for adding lightweight registries of functions."}, {"name": "camelot-py", "readme": "\n\n\n\nCamelot: PDF Table Extraction for Humans\n \n\n   \n\nCamelot is a Python library that can help you extract tables from PDFs!\nNote: You can also check out Excalibur, the web interface to Camelot!\n\nHere's how you can extract tables from PDFs. You can check out the PDF used in this example here.\n>>> import camelot\n>>> tables = camelot.read_pdf('foo.pdf')\n>>> tables\n<TableList n=1>\n>>> tables.export('foo.csv', f='csv', compress=True) # json, excel, html, markdown, sqlite\n>>> tables[0]\n<Table shape=(7, 7)>\n>>> tables[0].parsing_report\n{\n    'accuracy': 99.02,\n    'whitespace': 12.24,\n    'order': 1,\n    'page': 1\n}\n>>> tables[0].to_csv('foo.csv') # to_json, to_excel, to_html, to_markdown, to_sqlite\n>>> tables[0].df # get a pandas DataFrame!\n\n\n\n\nCycle Name\nKI (1/km)\nDistance (mi)\nPercent Fuel Savings\n\n\n\n\n\n\n\n\n\n\nImproved Speed\nDecreased Accel\nEliminate Stops\nDecreased Idle\n\n\n2012_2\n3.30\n1.3\n5.9%\n9.5%\n29.2%\n17.4%\n\n\n2145_1\n0.68\n11.2\n2.4%\n0.1%\n9.5%\n2.7%\n\n\n4234_1\n0.59\n58.7\n8.5%\n1.3%\n8.5%\n3.3%\n\n\n2032_2\n0.17\n57.8\n21.7%\n0.3%\n2.7%\n1.2%\n\n\n4171_1\n0.07\n173.9\n58.1%\n1.6%\n2.1%\n0.5%\n\n\n\nCamelot also comes packaged with a command-line interface!\nNote: Camelot only works with text-based PDFs and not scanned documents. (As Tabula explains, \"If you can click and drag to select text in your table in a PDF viewer, then your PDF is text-based\".)\nYou can check out some frequently asked questions here.\nWhy Camelot?\n\nConfigurability: Camelot gives you control over the table extraction process with tweakable settings.\nMetrics: You can discard bad tables based on metrics like accuracy and whitespace, without having to manually look at each table.\nOutput: Each table is extracted into a pandas DataFrame, which seamlessly integrates into ETL and data analysis workflows. 
You can also export tables to multiple formats, which include CSV, JSON, Excel, HTML, Markdown, and Sqlite.\n\nSee comparison with similar libraries and tools.\nSupport the development\nIf Camelot has helped you, please consider supporting its development with a one-time or monthly donation on OpenCollective.\nInstallation\nUsing conda\nThe easiest way to install Camelot is with conda, which is a package manager and environment management system for the Anaconda distribution.\n$ conda install -c conda-forge camelot-py\n\nUsing pip\nAfter installing the dependencies (tk and ghostscript), you can also just use pip to install Camelot:\n$ pip install \"camelot-py[base]\"\n\nFrom the source code\nAfter installing the dependencies, clone the repo using:\n$ git clone https://www.github.com/camelot-dev/camelot\n\nand install Camelot using pip:\n$ cd camelot\n$ pip install \".[base]\"\n\nDocumentation\nThe documentation is available at http://camelot-py.readthedocs.io/.\nWrappers\n\ncamelot-php provides a PHP wrapper on Camelot.\n\nContributing\nThe Contributor's Guide has detailed information about contributing issues, documentation, code, and tests.\nVersioning\nCamelot uses Semantic Versioning. For the available versions, see the tags on this repository. For the changelog, you can check out HISTORY.md.\nLicense\nThis project is licensed under the MIT License, see the LICENSE file for details.\n"}, {"name": "CairoSVG", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nCairoSVG is an SVG converter based on Cairo. It can export SVG files to PDF,\nEPS, PS, and PNG files.\n\nFree software: LGPL license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://cairosvg.org/documentation/\nChangelog: https://github.com/Kozea/CairoSVG/releases\nCode, issues, tests: https://github.com/Kozea/CairoSVG\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\n\nCairoSVG has been created and developed by Kozea (https://kozea.fr).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to CairoSVG. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the LGPL\nlicense, without any additional terms or conditions. 
For full\nauthorship information, see the version control history.\n\n\n", "description": "SVG converter based on Cairo that can export SVG files to PDF, PS, PNG etc."}, {"name": "cairocffi", "readme": "\n\n\n\nREADME.rst\n\n\n\n\ncairocffi is a CFFI-based drop-in replacement for Pycairo,\na set of Python bindings and object-oriented API for cairo.\nCairo is a 2D vector graphics library with support for multiple backends\nincluding image buffers, PNG, PostScript, PDF, and SVG file output.\nAdditionally, the cairocffi.pixbuf module uses GDK-PixBuf\nto decode various image formats for use in cairo.\n\nFree software: BSD license\nFor Python 3.7+, tested on CPython and PyPy\nDocumentation: https://doc.courtbouillon.org/cairocffi/\nChangelog: https://doc.courtbouillon.org/cairocffi/stable/changelog.html\nCode, issues, tests: https://github.com/Kozea/cairocffi\nCode of conduct: https://www.courtbouillon.org/code-of-conduct\nProfessional support: https://www.courtbouillon.org\nDonation: https://opencollective.com/courtbouillon\nAPI partially compatible with Pycairo.\nWorks with any version of cairo.\n\ncairocffi has been created and developed by Kozea (https://kozea.fr).\nProfessional support, maintenance and community management is provided by\nCourtBouillon (https://www.courtbouillon.org).\nCopyrights are retained by their contributors, no copyright assignment is\nrequired to contribute to cairocffi. Unless explicitly stated otherwise, any\ncontribution intentionally submitted for inclusion is licensed under the BSD\n3-clause license, without any additional terms or conditions. For full\nauthorship information, see the version control history.\n\n\n", "description": "CFFI-based drop-in replacement for Pycairo with support for multiple cairo backends."}, {"name": "cachetools", "readme": "\n\n\n\n\n\n\n\n\n\n\n\ncachetools\nInstallation\nProject Resources\nRelated Projects\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\ncachetools\n\n\n\n\n\n\n\n\n\nThis module provides various memoizing collections and decorators,\nincluding variants of the Python Standard Library's @lru_cache\nfunction decorator.\nfrom cachetools import cached, LRUCache, TTLCache\n\n# speed up calculating Fibonacci numbers with dynamic programming\n@cached(cache={})\ndef fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)\n\n# cache least recently used Python Enhancement Proposals\n@cached(cache=LRUCache(maxsize=32))\ndef get_pep(num):\n    url = 'http://www.python.org/dev/peps/pep-%04d/' % num\n    with urllib.request.urlopen(url) as s:\n        return s.read()\n\n# cache weather data for no longer than ten minutes\n@cached(cache=TTLCache(maxsize=1024, ttl=600))\ndef get_weather(place):\n    return owm.weather_at_place(place).get_weather()\nFor the purpose of this module, a cache is a mutable mapping of a\nfixed maximum size.  When the cache is full, i.e. 
by adding another\nitem the cache would exceed its maximum size, the cache must choose\nwhich item(s) to discard based on a suitable cache algorithm.\nThis module provides multiple cache classes based on different cache\nalgorithms, as well as decorators for easily memoizing function and\nmethod calls.\n\nInstallation\ncachetools is available from PyPI and can be installed by running:\npip install cachetools\n\nTyping stubs for this package are provided by typeshed and can be\ninstalled by running:\npip install types-cachetools\n\n\nProject Resources\n\nDocumentation\nIssue tracker\nSource code\nChange log\n\n\nRelated Projects\n\nasyncache: Helpers to use cachetools with async functions\nCacheToolsUtils: Cachetools Utilities\nkids.cache: Kids caching library\nshelved-cache: Persistent cache for Python cachetools\n\n\nLicense\nCopyright (c) 2014-2023 Thomas Kemmer.\nLicensed under the MIT License.\n\n\n", "description": "Module with memoizing collections and decorators like @lru_cache."}, {"name": "Brotli", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nBuild instructions\nVcpkg\nBazel\nCMake\nPython\nContributing\nBenchmarks\nRelated projects\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\nIntroduction\nBrotli is a generic-purpose lossless compression algorithm that compresses data\nusing a combination of a modern variant of the LZ77 algorithm, Huffman coding\nand 2nd order context modeling, with a compression ratio comparable to the best\ncurrently available general-purpose compression methods. It is similar in speed\nwith deflate but offers more dense compression.\nThe specification of the Brotli Compressed Data Format is defined in RFC 7932.\nBrotli is open-sourced under the MIT License, see the LICENSE file.\n\nPlease note: brotli is a \"stream\" format; it does not contain\nmeta-information, like checksums or uncompresssed data length. It is possible\nto modify \"raw\" ranges of the compressed stream and the decoder will not\nnotice that.\n\nBuild instructions\nVcpkg\nYou can download and install brotli using the vcpkg dependency manager:\ngit clone https://github.com/Microsoft/vcpkg.git\ncd vcpkg\n./bootstrap-vcpkg.sh\n./vcpkg integrate install\n./vcpkg install brotli\n\nThe brotli port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.\nBazel\nSee Bazel\nCMake\nThe basic commands to build and install brotli are:\n$ mkdir out && cd out\n$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./installed ..\n$ cmake --build . 
--config Release --target install\n\nYou can use other CMake configurations.\nPython\nTo install the latest release of the Python module, run the following:\n$ pip install brotli\n\nTo install the tip-of-the-tree version, run:\n$ pip install --upgrade git+https://github.com/google/brotli\n\nSee the Python readme for more details on installing\nfrom source, development, and testing.\nContributing\nWe are glad to answer library-related questions on the\nbrotli mailing list.\nRegular issues / feature requests should be reported in the\nissue tracker.\nFor reporting vulnerabilities, please read SECURITY.\nFor contributing changes, please read CONTRIBUTING.\nBenchmarks\n\nSquash Compression Benchmark / Unstable Squash Compression Benchmark\nLarge Text Compression Benchmark\nLzturbo Benchmark\n\nRelated projects\n\nDisclaimer: the Brotli authors take no responsibility for the third-party projects mentioned in this section.\n\nIndependent decoder implementation by Mark Adler, based entirely on the format specification.\nJavaScript port of the brotli decoder. Can be used directly via npm install brotli\nHand-ported decoder / encoder in Haxe by Dominik Homberger. Output source code: JavaScript, PHP, Python, Java and C#\n7Zip plugin\nDart native bindings\nDart compression framework with a fast FFI-based Brotli implementation and ready-to-use prebuilt binaries for Win/Linux/Mac\n\n\n", "description": "Compress and decompress data with the Brotli algorithm"}, {"name": "branca", "readme": "\n\n\n\nREADME.md\n\n\n\n\n\n\n\nBranca\nThis library is a spinoff from folium. It can be used to generate HTML + JS. It is based on Jinja2.\n\nDocumentation: https://python-visualization.github.io/branca/\nExamples: https://nbviewer.org/github/python-visualization/branca/tree/main/examples/\n\n\n\n", "description": "Generate HTML+JS maps and visualization widgets"}, {"name": "bokeh", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nInstallation\nResources\nFollow us\nSupport\nFiscal Support\nIn-kind Support\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\nBokeh is an interactive visualization library for modern web browsers. 
It provides elegant, concise construction of versatile graphics and affords high-performance interactivity across large or streaming datasets. Bokeh can help anyone who wants to create interactive plots, dashboards, and data applications quickly and easily.\n\n[badges: Package | Project | Downloads | Build | Community]\n\nConsider making a donation if you enjoy using Bokeh and want to support its development.\n\nInstallation\nTo install Bokeh and its required dependencies using pip, enter the following command at a Bash or Windows command prompt:\npip install bokeh\n\nTo install with conda, enter the following command at a Bash or Windows command prompt:\nconda install bokeh\n\nRefer to the installation documentation for more details.\nResources\nOnce Bokeh is installed, check out the first steps guides.\nVisit the full documentation site to view the User's Guide or launch the Bokeh tutorial to learn about Bokeh in live Jupyter Notebooks.\nCommunity support is available on the Project Discourse.\nIf you would like to contribute to Bokeh, please review the Contributor Guide and request an invitation to the Bokeh Dev Slack workspace.\nNote: Everyone who engages in the Bokeh project's discussion forums, codebases, and issue trackers is expected to follow the Code of Conduct.\nFollow us\nFollow us on Twitter @bokeh\nSupport\nFiscal Support\nThe Bokeh project is grateful for individual contributions, as well as for monetary support from the organizations and companies listed below:\n\nIf your company uses Bokeh and is able to sponsor the project, please contact info@bokeh.org\nBokeh is a Sponsored Project of NumFOCUS, a 501(c)(3) nonprofit charity in the United States. NumFOCUS provides Bokeh with fiscal, legal, and administrative support to help ensure the health and sustainability of the project. Visit numfocus.org for more information.\nDonations to Bokeh are managed by NumFOCUS. For donors in the United States, your gift is tax-deductible to the extent provided by law. As with any donation, you should consult with your tax adviser about your particular tax situation.\nIn-kind Support\nNon-monetary support can help with development, collaboration, infrastructure, security, and vulnerability management. 
The Bokeh project is grateful to the following companies for their donation of services:\n\nAmazon Web Services\nGitGuardian\nGitHub\nmakepath\nPingdom\nSlack\nQuestionScout\n1Password\n\n\n\n", "description": "Interactive web visualization library for Python."}, {"name": "blis", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCython BLIS: Fast BLAS-like operations from Python and Cython, without the tears\nInstallation\nBuilding BLIS for alternative architectures\na) Install with auto-detected CPU support\nb) Install using an existing configuration\nc) Install with generic arch support\nd) Build specific support\nUsage\nDevelopment\nUpdating the build files\nLinux\n\n\n\n\n\nREADME.md\n\n\n\n\n\nCython BLIS: Fast BLAS-like operations from Python and Cython, without the tears\nThis repository provides the\nBlis linear algebra routines as a\nself-contained Python C-extension.\nCurrently, we only supports single-threaded execution, as this is actually best\nfor our workloads (ML inference).\n\n\n\n\nInstallation\nYou can install the package via pip, first making sure that pip, setuptools,\nand wheel are up-to-date:\npip install -U pip setuptools wheel\npip install blis\nWheels should be available, so installation should be fast. If you want to\ninstall from source and you're on Windows, you'll need to install LLVM.\nBuilding BLIS for alternative architectures\nThe provided wheels should work on x86_64 and osx/arm64 architectures.\nUnfortunately we do not currently know a way to provide different wheels for\nalternative architectures, and we cannot provide a single binary that works\neverywhere. So if the wheel doesn't work for your CPU, you'll need to specify\nsource distribution, and tell Blis your CPU architecture using the BLIS_ARCH\nenvironment variable.\na) Install with auto-detected CPU support\npip install spacy --no-binary blis\nb) Install using an existing configuration\nProvide an architecture from the\nsupported configurations.\nBLIS_ARCH=\"power9\" pip install spacy --no-binary blis\nc) Install with generic arch support\n\n\u26a0\ufe0f generic is not optimized for any particular CPU and is extremely slow.\nOnly recommended for testing!\n\nBLIS_ARCH=\"generic\" pip install spacy --no-binary blis\nd) Build specific support\nIn order to compile Blis, cython-blis bundles makefile scripts for specific\narchitectures, that are compiled by running the Blis build system and logging\nthe commands. We do not yet have logs for every architecture, as there are some\narchitectures we have not had access to.\nSee here for list of\narchitectures. For example, here's how to build support for the Intel\narchitecture knl:\ngit clone https://github.com/explosion/cython-blis && cd cython-blis\ngit pull && git submodule init && git submodule update && git submodule status\npython3 -m venv venv\nsource venv/bin/activate\npip install -U pip setuptools wheel\npip install -r requirements.txt\n./bin/generate-make-jsonl linux knl\nBLIS_ARCH=\"knl\" python setup.py build_ext --inplace\nBLIS_ARCH=\"knl\" python setup.py bdist_wheel\nFingers crossed, this will build you a wheel that supports your platform. You\ncould then submit a PR with\nthe blis/_src/make/linux-knl.jsonl and blis/_src/include/linux-knl/blis.h\nfiles so that you can run:\nBLIS_ARCH=\"knl\" pip install --no-binary=blis\nUsage\nTwo APIs are provided: a high-level Python API, and direct\nCython access, which provides fused-type, nogil Cython\nbindings to the underlying Blis linear algebra library. 
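As a quick, hedged illustration of the high-level side, here is a minimal sketch assuming blis.py exposes a NumPy-friendly gemm helper (as in recent releases; check your installed version for the exact signature):\nimport numpy as np\nfrom blis.py import gemm  # assumed high-level wrapper around the BLIS gemm routine\n\nA = np.random.uniform(size=(5, 4)).astype('float32')\nB = np.random.uniform(size=(4, 3)).astype('float32')\nC = gemm(A, B)  # matrix-matrix product, expected shape (5, 3)\nprint(C.shape)\nThe Cython bindings are shown next. 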
Fused types are a simple\ntemplate mechanism, allowing just a touch of compile-time generic programming:\ncimport blis.cy\nA = <float*>calloc(nN * nI, sizeof(float))\nB = <float*>calloc(nO * nI, sizeof(float))\nC = <float*>calloc(nr_b0 * nr_b1, sizeof(float))\nblis.cy.gemm(blis.cy.NO_TRANSPOSE, blis.cy.NO_TRANSPOSE,\n             nO, nI, nN,\n             1.0, A, nI, 1, B, nO, 1,\n             1.0, C, nO, 1)\nBindings have been added as we've needed them. Please submit pull requests if\nthe library is missing some functions you require.\nDevelopment\nTo build the source package, you should run the following command:\n./bin/update-vendored-source\nThis populates the blis/_src folder for the various architectures, using the\nflame-blis submodule.\nUpdating the build files\nIn order to compile the Blis sources, we use jsonl files that provide the\nexplicit compiler flags. We build these jsonl files by running Blis's build\nsystem, and then converting the log. This avoids us having to replicate the\nbuild system within Python: we just use the jsonl to make a bunch of subprocess\ncalls. To support a new OS/architecture combination, we have to provide the\njsonl file and the header.\nLinux\nThe Linux build files need to be produced from within the manylinux2014 Docker\ncontainer, so that they will be compatible with the wheel building process.\nFirst, install docker. Then do the following to start the container:\nsudo docker run -it quay.io/pypa/manylinux2014_x86_64:latest\n\nOnce within the container, the following commands should check out the repo and\nbuild the jsonl files for the generic arch:\nmkdir /usr/local/repos\ncd /usr/local/repos\ngit clone https://github.com/explosion/cython-blis && cd cython-blis\ngit pull && git submodule init && git submodule update && git submodule\nstatus\n/opt/python/cp36-cp36m/bin/python -m venv env3.6\nsource env3.6/bin/activate\npip install -r requirements.txt\n./bin/generate-make-jsonl linux generic --export\nBLIS_ARCH=generic python setup.py build_ext --inplace\n# N.B.: don't copy to /tmp, docker cp doesn't work from there.\ncp blis/_src/include/linux-generic/blis.h /linux-generic-blis.h\ncp blis/_src/make/linux-generic.jsonl /\n\nThen from a new terminal, retrieve the two files we need out of the container:\nsudo docker ps -l # Get the container ID\n# When I'm in Vagrant, I need to go via cat -- but then I end up with dummy\n# lines at the top and bottom. Sigh. If you don't have that problem and\n# sudo docker cp just works, just copy the file.\nsudo docker cp aa9d42588791:/linux-generic-blis.h - | cat > linux-generic-blis.h\nsudo docker cp aa9d42588791:/linux-generic.jsonl - | cat > linux-generic.jsonl\n\n\n\n", "description": "Fast BLAS-like linear algebra operations for Python and Cython"}, {"name": "blinker", "readme": "\nBlinker provides a fast dispatching system that allows any number of\ninterested parties to subscribe to events, or \u201csignals\u201d.\nSignal receivers can subscribe to specific senders or receive signals\nsent by any sender.\n>>> from blinker import signal\n>>> started = signal('round-started')\n>>> def each(round):\n...     print(f\"Round {round}\")\n...\n>>> started.connect(each)\n\n>>> def round_two(round):\n...     print(\"This is round two.\")\n...\n>>> started.connect(round_two, sender=2)\n\n>>> for round in range(1, 4):\n...     
started.send(round)\n...\nRound 1!\nRound 2!\nThis is round two.\nRound 3!\n\nLinks\n\nDocumentation: https://blinker.readthedocs.io/\nChanges: https://blinker.readthedocs.io/#changes\nPyPI Releases: https://pypi.org/project/blinker/\nSource Code: https://github.com/pallets-eco/blinker/\nIssue Tracker: https://github.com/pallets-eco/blinker/issues/\n\n\n", "description": "Fast object-to-object and broadcast signaling"}, {"name": "bleach", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nBleach\nReporting Bugs\nSecurity\nInstalling Bleach\nUpgrading Bleach\nBasic use\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nBleach\n\n\n\n\n\n\nNOTE: 2023-01-23: Bleach is deprecated. See issue:\n#698\nBleach is an allowed-list-based HTML sanitizing library that escapes or strips\nmarkup and attributes.\nBleach can also linkify text safely, applying filters that Django's urlize\nfilter cannot, and optionally setting rel attributes, even on links already\nin the text.\nBleach is intended for sanitizing text from untrusted sources. If you find\nyourself jumping through hoops to allow your site administrators to do lots of\nthings, you're probably outside the use cases. Either trust those users, or\ndon't.\nBecause it relies on html5lib, Bleach is as good as modern browsers at dealing\nwith weird, quirky HTML fragments. And any of Bleach's methods will fix\nunbalanced or mis-nested tags.\nThe version on GitHub is the most up-to-date and contains the latest bug\nfixes. You can find full documentation on ReadTheDocs.\n\n\nCode:https://github.com/mozilla/bleach\n\nDocumentation:https://bleach.readthedocs.io/\n\nIssue tracker:https://github.com/mozilla/bleach/issues\n\nLicense:Apache License v2; see LICENSE file\n\n\n\n\nReporting Bugs\nFor regular bugs, please report them in our issue tracker.\nIf you believe that you've found a security vulnerability, please file a secure\nbug report in our bug tracker\nor send an email to security AT mozilla DOT org.\nFor more information on security-related bug disclosure and the PGP key to use\nfor sending encrypted mail or to verify responses received from that address,\nplease read our wiki page at\nhttps://www.mozilla.org/en-US/security/#For_Developers.\n\nSecurity\nBleach is a security-focused library.\nWe have a responsible security vulnerability reporting process. Please use\nthat if you're reporting a security issue.\nSecurity issues are fixed in private. After we land such a fix, we'll do a\nrelease.\nFor every release, we mark security issues we've fixed in the CHANGES in\nthe Security issues section. We include any relevant CVE links.\n\nInstalling Bleach\nBleach is available on PyPI, so you can install it with pip:\n$ pip install bleach\n\n\nUpgrading Bleach\n\nWarning\nBefore doing any upgrades, read through Bleach Changes for backwards\nincompatible changes, newer versions, etc.\nBleach follows semver 2 versioning. Vendored libraries will not\nbe changed in patch releases.\n\n\nBasic use\nThe simplest way to use Bleach is:\n>>> import bleach\n\n>>> bleach.clean('an <script>evil()</script> example')\nu'an &lt;script&gt;evil()&lt;/script&gt; example'\n\n>>> bleach.linkify('an http://example.com url')\nu'an <a href=\"http://example.com\" rel=\"nofollow\">http://example.com</a> url'\n\nCode of Conduct\nThis project and repository is governed by Mozilla's code of conduct and\netiquette guidelines. 
For more details please see the CODE_OF_CONDUCT.md\n\n\n", "description": "HTML sanitizing and text linkification library."}, {"name": "beautifulsoup4", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "HTML/XML parser for quick web scraping."}, {"name": "bcrypt", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nbcrypt\nInstallation\nAlternatives\nChangelog\n5.0.0 (UNRELEASED)\n4.0.1\n4.0.0\n3.2.2\n3.2.1\n3.2.0\n3.1.7\n3.1.6\n3.1.5\n3.1.4\n3.1.3\n3.1.2\n3.1.1\n3.1.0\n3.0.0\n2.0.0\nUsage\nPassword Hashing\nKDF\nAdjustable Work Factor\nAdjustable Prefix\nMaximum Password Length\nCompatibility\nC Code\nSecurity\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nbcrypt\n\n\n\nAcceptable password hashing for your software and your servers (but you should\nreally use argon2id or scrypt)\n\nInstallation\nTo install bcrypt, simply:\n$ pip install bcrypt\nNote that bcrypt should build very easily on Linux provided you have a C\ncompiler and a Rust compiler (the minimum supported Rust version is 1.56.0).\nFor Debian and Ubuntu, the following command will ensure that the required dependencies are installed:\n$ sudo apt-get install build-essential cargo\nFor Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed:\n$ sudo yum install gcc cargo\nFor Alpine, the following command will ensure that the required dependencies are installed:\n$ apk add --update musl-dev gcc cargo\n\nAlternatives\nWhile bcrypt remains an acceptable choice for password storage, depending on your specific use case you may also want to consider using scrypt (either via standard library or cryptography) or argon2id via argon2_cffi.\n\nChangelog\n\n5.0.0 (UNRELEASED)\n\nDropped support for Python 3.6.\nBumped MSRV to 1.60.\n\n\n4.0.1\n\nWe now build PyPy manylinux wheels.\nFixed a bug where passing an invalid salt to checkpw could result in\na pyo3_runtime.PanicException. It now correctly raises a ValueError.\n\n\n4.0.0\n\nbcrypt is now implemented in Rust. Users building from source will need\nto have a Rust compiler available. Nothing will change for users downloading\nwheels.\nWe no longer ship manylinux2010 wheels. 
Users should upgrade to the latest\npip to ensure this doesn\u2019t cause issues downloading wheels on their\nplatform. We now ship manylinux_2_28 wheels for users on new enough platforms.\nNUL bytes are now allowed in inputs.\n\n\n3.2.2\n\nFixed packaging of py.typed files in wheels so that mypy works.\n\n\n3.2.1\n\nAdded support for compilation on z/OS\nThe next release of bcrypt with be 4.0 and it will require Rust at\ncompile time, for users building from source. There will be no additional\nrequirement for users who are installing from wheels. Users on most\nplatforms will be able to obtain a wheel by making sure they have an up to\ndate pip. The minimum supported Rust version will be 1.56.0.\nThis will be the final release for which we ship manylinux2010 wheels.\nGoing forward the minimum supported manylinux ABI for our wheels will be\nmanylinux2014. The vast majority of users will continue to receive\nmanylinux wheels provided they have an up to date pip.\n\n\n3.2.0\n\nAdded typehints for library functions.\nDropped support for Python versions less than 3.6 (2.7, 3.4, 3.5).\nShipped abi3 Windows wheels (requires pip >= 20).\n\n\n3.1.7\n\nSet a setuptools lower bound for PEP517 wheel building.\nWe no longer distribute 32-bit manylinux1 wheels. Continuing to produce\nthem was a maintenance burden.\n\n\n3.1.6\n\nAdded support for compilation on Haiku.\n\n\n3.1.5\n\nAdded support for compilation on AIX.\nDropped Python 2.6 and 3.3 support.\nSwitched to using abi3 wheels for Python 3. If you are not getting a\nwheel on a compatible platform please upgrade your pip version.\n\n\n3.1.4\n\nFixed compilation with mingw and on illumos.\n\n\n3.1.3\n\nFixed a compilation issue on Solaris.\nAdded a warning when using too few rounds with kdf.\n\n\n3.1.2\n\nFixed a compile issue affecting big endian platforms.\nFixed invalid escape sequence warnings on Python 3.6.\nFixed building in non-UTF8 environments on Python 2.\n\n\n3.1.1\n\nResolved a UserWarning when used with cffi 1.8.3.\n\n\n3.1.0\n\nAdded support for checkpw, a convenience method for verifying a password.\nEnsure that you get a $2y$ hash when you input a $2y$ salt.\nFixed a regression where $2a hashes were vulnerable to a wraparound bug.\nFixed compilation under Alpine Linux.\n\n\n3.0.0\n\nSwitched the C backend to code obtained from the OpenBSD project rather than\nopenwall.\nAdded support for bcrypt_pbkdf via the kdf function.\n\n\n2.0.0\n\nAdded support for an adjustible prefix when calling gensalt.\nSwitched to CFFI 1.0+\n\n\nUsage\n\nPassword Hashing\nHashing and then later checking that a password matches the previous hashed\npassword is very simple:\n>>> import bcrypt\n>>> password = b\"super secret password\"\n>>> # Hash a password for the first time, with a randomly-generated salt\n>>> hashed = bcrypt.hashpw(password, bcrypt.gensalt())\n>>> # Check that an unhashed password matches one that has previously been\n>>> # hashed\n>>> if bcrypt.checkpw(password, hashed):\n...     print(\"It Matches!\")\n... else:\n...     print(\"It Does not Match :(\")\n\nKDF\nAs of 3.0.0 bcrypt now offers a kdf function which does bcrypt_pbkdf.\nThis KDF is used in OpenSSH's newer encrypted private key format.\n>>> import bcrypt\n>>> key = bcrypt.kdf(\n...     password=b'password',\n...     salt=b'salt',\n...     desired_key_bytes=32,\n...     rounds=100)\n\nAdjustable Work Factor\nOne of bcrypt's features is an adjustable logarithmic work factor. 
To adjust\nthe work factor merely pass the desired number of rounds to\nbcrypt.gensalt(rounds=12) which defaults to 12):\n>>> import bcrypt\n>>> password = b\"super secret password\"\n>>> # Hash a password for the first time, with a certain number of rounds\n>>> hashed = bcrypt.hashpw(password, bcrypt.gensalt(14))\n>>> # Check that a unhashed password matches one that has previously been\n>>> #   hashed\n>>> if bcrypt.checkpw(password, hashed):\n...     print(\"It Matches!\")\n... else:\n...     print(\"It Does not Match :(\")\n\nAdjustable Prefix\nAnother one of bcrypt's features is an adjustable prefix to let you define what\nlibraries you'll remain compatible with. To adjust this, pass either 2a or\n2b (the default) to bcrypt.gensalt(prefix=b\"2b\") as a bytes object.\nAs of 3.0.0 the $2y$ prefix is still supported in hashpw but deprecated.\n\nMaximum Password Length\nThe bcrypt algorithm only handles passwords up to 72 characters, any characters\nbeyond that are ignored. To work around this, a common approach is to hash a\npassword with a cryptographic hash (such as sha256) and then base64\nencode it to prevent NULL byte problems before hashing the result with\nbcrypt:\n>>> password = b\"an incredibly long password\" * 10\n>>> hashed = bcrypt.hashpw(\n...     base64.b64encode(hashlib.sha256(password).digest()),\n...     bcrypt.gensalt()\n... )\n\nCompatibility\nThis library should be compatible with py-bcrypt and it will run on Python\n3.6+, and PyPy 3.\n\nC Code\nThis library uses code from OpenBSD.\n\nSecurity\nbcrypt follows the same security policy as cryptography, if you\nidentify a vulnerability, we ask you to contact us privately.\n\n\n", "description": "Modern password hashing library implementing Bcrypt."}, {"name": "basemap", "readme": "\nbasemap\nPlot on map projections (with coastlines and political boundaries) using\nmatplotlib.\nThis package depends on the support package basemap-data with the\nbasic basemap data assets, and optionally on the support package\nbasemap-data-hires with high-resolution data assets.\nInstallation\nPrecompiled binary wheels for Windows and GNU/Linux are available in\nPyPI (architectures x86 and x64, Python 2.7 and 3.5+) and can be\ninstalled with pip:\npython -m pip install basemap\n\nIf you need to install from source, please visit the\nGitHub repository for a\nstep-by-step description.\nLicense\nThe library is licensed under the terms of the MIT license (see\nLICENSE). The GEOS dynamic library bundled with the package wheels\nis provided under the terms of the LGPLv2.1 license as given in\nLICENSE.geos.\n"}, {"name": "basemap-data", "readme": "\nbasemap-data\nPlot on map projections (with coastlines and political boundaries) using\nmatplotlib.\nThis is a support package for basemap with the basic data assets\nrequired by basemap to work.\nInstallation\nThe package is available in PyPI and can be installed with pip:\npython -m pip install basemap-data\n\nLicense\nThe land-sea mask, coastline, lake, river and political boundary data\nare extracted from the GSHHG datasets (version 2.3.6) using GMT\n(5.x series) and are included under the terms of the LGPLv3+ license\n(see COPYING and COPYING.LESSER).\nThe other files are included under the terms of the MIT license. 
See\nLICENSE.epsg for the EPSG file (taken from the PROJ.4 package) and\nLICENSE.mit for the rest.\n"}, {"name": "backports.zoneinfo", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nbackports.zoneinfo: Backport of the standard library module zoneinfo\nInstallation and depending on this library\nUse\nContributing\n\n\n\n\n\nREADME.md\n\n\n\n\nbackports.zoneinfo: Backport of the standard library module zoneinfo\nThis package was originally the reference implementation for PEP 615, which proposes support for the IANA time zone database in the standard library, and now serves as a backport to Python 3.6+ (including PyPy).\nThis exposes the backports.zoneinfo module, which is a backport of the zoneinfo module. The backport's documentation can be found on readthedocs.\nThe module uses the system time zone data if available, and falls back to the tzdata package (available on PyPI) if installed.\nInstallation and depending on this library\nThis module is called backports.zoneinfo on PyPI. To install it in your local environment, use:\npip install backports.zoneinfo\n\nOr (particularly on Windows), you can also use the tzdata extra (which basically just declares a dependency on tzdata, so this doesn't actually save you any typing \ud83d\ude05):\npip install backports.zoneinfo[tzdata]\n\nIf you want to use this in your application, it is best to use PEP 508 environment markers to declare a dependency conditional on the Python version:\nbackports.zoneinfo;python_version<\"3.9\"\n\nSupport for backports.zoneinfo in Python 3.9+ is currently minimal, since it is expected that you would use the standard library zoneinfo module instead.\nUse\nThe backports.zoneinfo module should be a drop-in replacement for the Python 3.9 standard library module zoneinfo. If you do not support anything earlier than Python 3.9, you do not need this library; if you are supporting Python 3.6+, you may want to use this idiom to \"fall back\" to backports.zoneinfo:\ntry:\n    import zoneinfo\nexcept ImportError:\n    from backports import zoneinfo\nTo get access to time zones with this module, construct a ZoneInfo object and attach it to your datetime:\n>>> from backports.zoneinfo import ZoneInfo\n>>> from datetime import datetime, timedelta, timezone\n>>> dt = datetime(1992, 3, 1, tzinfo=ZoneInfo(\"Europe/Minsk\"))\n>>> print(dt)\n1992-03-01 00:00:00+02:00\n>>> print(dt.utcoffset())\n2:00:00\n>>> print(dt.tzname())\nEET\nArithmetic works as expected without the need for a \"normalization\" step:\n>>> dt += timedelta(days=90)\n>>> print(dt)\n1992-05-30 00:00:00+03:00\n>>> dt.utcoffset()\ndatetime.timedelta(seconds=10800)\n>>> dt.tzname()\n'EEST'\nAmbiguous and imaginary times are handled using the fold attribute added in PEP 495:\n>>> dt = datetime(2020, 11, 1, 1, tzinfo=ZoneInfo(\"America/Chicago\"))\n>>> print(dt)\n2020-11-01 01:00:00-05:00\n>>> print(dt.replace(fold=1))\n2020-11-01 01:00:00-06:00\n\n>>> UTC = timezone.utc\n>>> print(dt.astimezone(UTC))\n2020-11-01 06:00:00+00:00\n>>> print(dt.replace(fold=1).astimezone(UTC))\n2020-11-01 07:00:00+00:00\nContributing\nCurrently we are not accepting contributions to this repository because we have not put the CLA in place and we would like to avoid complicating the process of adoption into the standard library. 
Contributions to CPython will eventually be backported to this repository \u2014 see the Python developer's guide for more information on how to contribute to CPython.\n\n\n"}, {"name": "backoff", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nbackoff\nExamples\n@backoff.on_exception\n@backoff.on_predicate\n@backoff.runtime\nJitter\nUsing multiple decorators\nRuntime Configuration\nEvent handlers\nAsynchronous code\nLogging configuration\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nbackoff\n\n\n\n\n\n\n\n\n\nFunction decoration for backoff and retry\nThis module provides function decorators which can be used to wrap a\nfunction such that it will be retried until some condition is met. It\nis meant to be of use when accessing unreliable resources with the\npotential for intermittent failures i.e. network resources and external\nAPIs. Somewhat more generally, it may also be of use for dynamically\npolling resources for externally generated content.\nDecorators support both regular functions for synchronous code and\nasyncio's coroutines\nfor asynchronous code.\n\nExamples\nSince Kenneth Reitz's requests module\nhas become a defacto standard for synchronous HTTP clients in Python,\nnetworking examples below are written using it, but it is in no way required\nby the backoff module.\n\n@backoff.on_exception\nThe on_exception decorator is used to retry when a specified exception\nis raised. Here's an example using exponential backoff when any\nrequests exception is raised:\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException)\ndef get_url(url):\n    return requests.get(url)\nThe decorator will also accept a tuple of exceptions for cases where\nthe same backoff behavior is desired for more than one exception type:\n@backoff.on_exception(backoff.expo,\n                      (requests.exceptions.Timeout,\n                       requests.exceptions.ConnectionError))\ndef get_url(url):\n    return requests.get(url)\nGive Up Conditions\nOptional keyword arguments can specify conditions under which to give\nup.\nThe keyword argument max_time specifies the maximum amount\nof total time in seconds that can elapse before giving up.\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      max_time=60)\ndef get_url(url):\n    return requests.get(url)\nKeyword argument max_tries specifies the maximum number of calls\nto make to the target function before giving up.\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      max_tries=8,\n                      jitter=None)\ndef get_url(url):\n    return requests.get(url)\nIn some cases the raised exception instance itself may need to be\ninspected in order to determine if it is a retryable condition. The\ngiveup keyword arg can be used to specify a function which accepts\nthe exception and returns a truthy value if the exception should not\nbe retried:\ndef fatal_code(e):\n    return 400 <= e.response.status_code < 500\n\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      max_time=300,\n                      giveup=fatal_code)\ndef get_url(url):\n    return requests.get(url)\nBy default, when a give up event occurs, the exception in question is reraised\nand so code calling an on_exception-decorated function may still\nneed to do exception handling. 
This behavior can optionally be disabled\nusing the raise_on_giveup keyword argument.\nIn the code below, requests.exceptions.RequestException will not be raised\nwhen giveup occurs. Note that the decorated function will return None in this\ncase, regardless of the logic in the on_exception handler.\ndef fatal_code(e):\n    return 400 <= e.response.status_code < 500\n\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      max_time=300,\n                      raise_on_giveup=False,\n                      giveup=fatal_code)\ndef get_url(url):\n    return requests.get(url)\nThis is useful for non-mission critical code where you still wish to retry\nthe code inside of backoff.on_exception but wish to proceed with execution\neven if all retries fail.\n\n@backoff.on_predicate\nThe on_predicate decorator is used to retry when a particular\ncondition is true of the return value of the target function.  This may\nbe useful when polling a resource for externally generated content.\nHere's an example which uses a fibonacci sequence backoff when the\nreturn value of the target function is the empty list:\n@backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)\ndef poll_for_messages(queue):\n    return queue.get()\nExtra keyword arguments are passed when initializing the\nwait generator, so the max_value param above is passed as a keyword\narg when initializing the fibo generator.\nWhen not specified, the predicate param defaults to the falsey test,\nso the above can more concisely be written:\n@backoff.on_predicate(backoff.fibo, max_value=13)\ndef poll_for_message(queue):\n    return queue.get()\nMore simply, a function which continues polling every second until it\ngets a non-falsey result could be defined like like this:\n@backoff.on_predicate(backoff.constant, jitter=None, interval=1)\ndef poll_for_message(queue):\n    return queue.get()\nThe jitter is disabled in order to keep the polling frequency fixed.\n\n@backoff.runtime\nYou can also use the backoff.runtime generator to make use of the\nreturn value or thrown exception of the decorated method.\nFor example, to use the value in the Retry-After header of the response:\n@backoff.on_predicate(\n    backoff.runtime,\n    predicate=lambda r: r.status_code == 429,\n    value=lambda r: int(r.headers.get(\"Retry-After\")),\n    jitter=None,\n)\ndef get_url():\n    return requests.get(url)\n\nJitter\nA jitter algorithm can be supplied with the jitter keyword arg to\neither of the backoff decorators. This argument should be a function\naccepting the original unadulterated backoff value and returning it's\njittered counterpart.\nAs of version 1.2, the default jitter function backoff.full_jitter\nimplements the 'Full Jitter' algorithm as defined in the AWS\nArchitecture Blog's Exponential Backoff And Jitter post.\nNote that with this algorithm, the time yielded by the wait generator\nis actually the maximum amount of time to wait.\nPrevious versions of backoff defaulted to adding some random number of\nmilliseconds (up to 1s) to the raw sleep value. 
If desired, this\nbehavior is now available as backoff.random_jitter.\n\nUsing multiple decorators\nThe backoff decorators may also be combined to specify different\nbackoff behavior for different cases:\n@backoff.on_predicate(backoff.fibo, max_value=13)\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.HTTPError,\n                      max_time=60)\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.Timeout,\n                      max_time=300)\ndef poll_for_message(queue):\n    return queue.get()\n\nRuntime Configuration\nThe decorator functions on_exception and on_predicate are\ngenerally evaluated at import time. This is fine when the keyword args\nare passed as constant values, but suppose we want to consult a\ndictionary with configuration options that only become available at\nruntime. The relevant values are not available at import time. Instead,\ndecorator functions can be passed callables which are evaluated at\nruntime to obtain the value:\ndef lookup_max_time():\n    # pretend we have a global reference to 'app' here\n    # and that it has a dictionary-like 'config' property\n    return app.config[\"BACKOFF_MAX_TIME\"]\n\n@backoff.on_exception(backoff.expo,\n                      ValueError,\n                      max_time=lookup_max_time)\n\nEvent handlers\nBoth backoff decorators optionally accept event handler functions\nusing the keyword arguments on_success, on_backoff, and on_giveup.\nThis may be useful in reporting statistics or performing other custom\nlogging.\nHandlers must be callables with a unary signature accepting a dict\nargument. This dict contains the details of the invocation. Valid keys\ninclude:\n\ntarget: reference to the function or method being invoked\nargs: positional arguments to func\nkwargs: keyword arguments to func\ntries: number of invocation tries so far\nelapsed: elapsed time in seconds so far\nwait: seconds to wait (on_backoff handler only)\nvalue: value triggering backoff (on_predicate decorator only)\n\nA handler which prints the details of the backoff event could be\nimplemented like so:\ndef backoff_hdlr(details):\n    print (\"Backing off {wait:0.1f} seconds after {tries} tries \"\n           \"calling function {target} with args {args} and kwargs \"\n           \"{kwargs}\".format(**details))\n\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      on_backoff=backoff_hdlr)\ndef get_url(url):\n    return requests.get(url)\nMultiple handlers per event type\nIn all cases, iterables of handler functions are also accepted, which\nare called in turn. For example, you might provide a simple list of\nhandler functions as the value of the on_backoff keyword arg:\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      on_backoff=[backoff_hdlr1, backoff_hdlr2])\ndef get_url(url):\n    return requests.get(url)\nGetting exception info\nIn the case of the on_exception decorator, all on_backoff and\non_giveup handlers are called from within the except block for the\nexception being handled. Therefore exception info is available to the\nhandler functions via the python standard library, specifically\nsys.exc_info() or the traceback module. 
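A give-up handler that reports the active exception could therefore be sketched as follows (the handler name and URL are made up for illustration):

import sys
import backoff
import requests

def giveup_hdlr(details):
    # Called inside the except block, so sys.exc_info() still refers
    # to the exception that triggered the give-up event.
    _, exc_value, _ = sys.exc_info()
    print("Giving up after {tries} tries: {exc!r}".format(
        tries=details["tries"], exc=exc_value))

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_tries=3,
                      on_giveup=giveup_hdlr)
def get_url(url):
    return requests.get(url)
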
The exception is also\navailable at the exception key in the details dict passed to the\nhandlers.\n\nAsynchronous code\nBackoff supports asynchronous execution in Python 3.5 and above.\nTo use backoff in asynchronous code based on\nasyncio\nyou simply need to apply backoff.on_exception or backoff.on_predicate\nto coroutines.\nYou can also use coroutines for the on_success, on_backoff, and\non_giveup event handlers, with the interface otherwise being identical.\nThe following examples use aiohttp\nasynchronous HTTP client/server library.\n@backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)\nasync def get_url(url):\n    async with aiohttp.ClientSession(raise_for_status=True) as session:\n        async with session.get(url) as response:\n            return await response.text()\n\nLogging configuration\nBy default, backoff and retry attempts are logged to the 'backoff'\nlogger. By default, this logger is configured with a NullHandler, so\nthere will be nothing output unless you configure a handler.\nProgrammatically, this might be accomplished with something as simple\nas:\nlogging.getLogger('backoff').addHandler(logging.StreamHandler())\nThe default logging level is INFO, which corresponds to logging\nanytime a retry event occurs. If you would instead like to log\nonly when a giveup event occurs, set the logger level to ERROR.\nlogging.getLogger('backoff').setLevel(logging.ERROR)\nIt is also possible to specify an alternate logger with the logger\nkeyword argument.  If a string value is specified the logger will be\nlooked up by name.\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      logger='my_logger')\n# ...\nIt is also supported to specify a Logger (or LoggerAdapter) object\ndirectly.\nmy_logger = logging.getLogger('my_logger')\nmy_handler = logging.StreamHandler()\nmy_logger.addHandler(my_handler)\nmy_logger.setLevel(logging.ERROR)\n\n@backoff.on_exception(backoff.expo,\n                      requests.exceptions.RequestException,\n                      logger=my_logger)\n# ...\nDefault logging can be disabled all together by specifying\nlogger=None. In this case, if desired alternative logging behavior\ncould be defined by using custom event handlers.\n\n\n", "description": "Function decoration for retrying."}, {"name": "backcall", "readme": "\n\n\n\nREADME.rst\n\n\n\n\nbackcall\n\nSpecifications for callback functions passed in to an API\nIf your code lets other people supply callback functions, it's important to\nspecify the function signature you expect, and check that functions support that.\nAdding extra parameters later would break other peoples code unless you're careful.\nbackcall provides a way of specifying the callback signature using a prototype\nfunction:\nfrom backcall import callback_prototype\n\n@callback_prototype\ndef handle_ping(sender, delay=None):\n    # Specify positional parameters without a default, and keyword\n    # parameters with a default.\n    pass\n\ndef register_ping_handler(callback):\n    # This checks and adapts the function passed in:\n    callback = handle_ping.adapt(callback)\n    ping_callbacks.append(callback)\n\nIf the callback takes fewer parameters than your prototype, backcall will wrap\nit in a function that discards the extra arguments. If the callback expects\nmore arguments, a TypeError is thrown when it is registered.\nFor more details, see the docs or\nthe Demo notebook.\nThe tests are run with pytest. 
In the root directory,\nexecute:\npy.test\n\n\n\n", "description": "Turn any Python function into a callable object that provides safety against interruption."}, {"name": "Babel", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nAbout Babel\nContributing to Babel\n\n\n\n\n\nREADME.rst\n\n\n\n\n\nAbout Babel\nBabel is a Python library that provides an integrated collection of\nutilities that assist with internationalizing and localizing Python\napplications (in particular web-based applications.)\nDetails can be found in the HTML files in the docs folder.\nFor more information please visit the Babel web site:\nhttp://babel.pocoo.org/\nJoin the chat at https://gitter.im/python-babel/babel\n\nContributing to Babel\nIf you want to contribute code to Babel, please take a look at our\nCONTRIBUTING.md.\nIf you know your way around Babels codebase a bit and like to help\nfurther, we would appreciate any help in reviewing pull requests. Please\ncontact us at https://gitter.im/python-babel/babel if you're interested!\n\n\n", "description": "Internationalization utilities."}, {"name": "audioread", "readme": "\n\n\n\n\n\n\n\n\n\n\n\naudioread\nExample\nTroubleshooting\nVersion History\nEt Cetera\n\n\n\n\n\nREADME.rst\n\n\n\n\naudioread\nDecode audio files using whichever backend is available. The library\ncurrently supports:\n\nGstreamer via PyGObject.\nCore Audio on Mac OS X via ctypes. (PyObjC not required.)\nMAD via the pymad bindings.\nFFmpeg or Libav via its command-line interface.\nThe standard library wave, aifc, and sunau modules (for\nuncompressed audio formats).\n\nUse the library like so:\nwith audioread.audio_open(filename) as f:\n    print(f.channels, f.samplerate, f.duration)\n    for buf in f:\n        do_something(buf)\n\nBuffers in the file can be accessed by iterating over the object returned from\naudio_open. Each buffer is a bytes-like object (buffer, bytes, or\nbytearray) containing raw 16-bit little-endian signed integer PCM\ndata. (Currently, these PCM format parameters are not configurable, but this\ncould be added to most of the backends.)\nAdditional values are available as fields on the audio file object:\n\nchannels is the number of audio channels (an integer).\nsamplerate is given in Hz (an integer).\nduration is the length of the audio in seconds (a float).\n\nThe audio_open function transparently selects a backend that can read the\nfile. (Each backend is implemented in a module inside the audioread\npackage.) If no backends succeed in opening the file, a DecodeError\nexception is raised. This exception is only used when the file type is\nunsupported by the backends; if the file doesn't exist, a standard IOError\nwill be raised.\nA second optional parameter to audio_open specifies which backends to try\n(instead of trying them all, which is the default). You can use the\navailable_backends function to get a list backends that are usable on the\ncurrent system.\nAudioread supports Python 3 (3.8+).\n\nExample\nThe included decode.py script demonstrates using this package to\nconvert compressed audio files to WAV files.\n\nTroubleshooting\nA NoBackendError exception means that the library could not find one of\nthe libraries or tools it needs to decode audio. This could mean, for example,\nthat you have a broken installation of FFmpeg. To check, try typing\nffmpeg -version in your shell. 
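From Python, it can also help to ask audioread which backends it can actually load, and to point audio_open at them explicitly (a sketch; the filename is a placeholder):

import audioread

backends = audioread.available_backends()
print(backends)  # an empty or very short list suggests a decoder is missing

with audioread.audio_open("example.wav", backends=backends) as f:
    print(f.channels, f.samplerate, f.duration)

If the list is empty, the ffmpeg -version check above is the next thing to look at.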
If that gives you an error, try installing\nFFmpeg with your OS's package manager (e.g., apt or yum) or using Conda.\n\nVersion History\n\n3.0.1\nFix a possible deadlock when FFmpeg's version output produces too much data.\n3.0.0\nDrop support for Python 2 and older versions of Python 3. The library now\nrequires Python 3.6+.\nIncrease default block size in FFmpegAudioFile to get slightly faster file reading.\nCache backends for faster lookup (thanks to @bmcfee).\nAudio file classes now inherit from a common base AudioFile class.\n2.1.9\nWork correctly with GStreamer 1.18 and later (thanks to @ssssam).\n2.1.8\nFix an unhandled OSError when FFmpeg is not installed.\n2.1.7\nProperly close some filehandles in the FFmpeg backend (thanks to\n@RyanMarcus and @ssssam).\nThe maddec backend now always produces bytes objects, like the other\nbackends (thanks to @ssssam).\nResolve an audio data memory leak in the GStreamer backend (thanks again to\n@ssssam).\nYou can now optionally specify which specific backends audio_open should\ntry (thanks once again to @ssssam).\nOn Windows, avoid opening a console window to run FFmpeg (thanks to @flokX).\n2.1.6\nFix a \"no such process\" crash in the FFmpeg backend on Windows Subsystem for\nLinux (thanks to @llamasoft).\nAvoid suppressing SIGINT in the GStreamer backend on older versions of\nPyGObject (thanks to @lazka).\n2.1.5\nProperly clean up the file handle when a backend fails to decode a file.\nFix parsing of \"N.M\" channel counts in the FFmpeg backend (thanks to @piem).\nAvoid a crash in the raw backend when a file uses an unsupported number of\nbits per sample (namely, 24-bit samples in Python < 3.4).\nAdd a __version__ value to the package.\n2.1.4\nFix a bug in the FFmpeg backend where, after closing a file, the program's\nstandard input stream would be \"broken\" and wouldn't receive any input.\n2.1.3\nAvoid some warnings in the GStreamer backend when using modern versions of\nGLib. We now require at least GLib 2.32.\n2.1.2\nFix a file descriptor leak when opening and closing many files using\nGStreamer.\n2.1.1\nJust fix ReST formatting in the README.\n2.1.0\nThe FFmpeg backend can now also use Libav's avconv command.\nFix a warning by requiring GStreamer >= 1.0.\nFix some Python 3 crashes with the new GStreamer backend (thanks to\n@xix-xeaon).\n2.0.0\nThe GStreamer backend now uses GStreamer 1.x via the new\ngobject-introspection API (and is compatible with Python 3).\n1.2.2\nWhen running FFmpeg on Windows, disable its crash dialog. 
Thanks to\njcsaaddupuy.\n1.2.1\nFix an unhandled exception when opening non-raw audio files (thanks to\naostanin).\nFix Python 3 compatibility for the raw-file backend.\n1.2.0\nAdd support for FFmpeg on Windows (thanks to Jean-Christophe Saad-Dupuy).\n1.1.0\nAdd support for Sun/NeXT Au files via the standard-library sunau\nmodule (thanks to Dan Ellis).\n1.0.3\nUse the rawread (standard-library) backend for .wav files.\n1.0.2\nSend SIGKILL, not SIGTERM, to ffmpeg processes to avoid occasional hangs.\n1.0.1\nWhen GStreamer fails to report a duration, raise an exception instead of\nsilently setting the duration field to None.\n1.0.0\nCatch GStreamer's exception when necessary components, such as\nuridecodebin, are missing.\nThe GStreamer backend now accepts relative paths.\nFix a hang in GStreamer when the stream finishes before it begins (when\nreading broken files).\nInitial support for Python 3.\n0.8\nAll decoding errors are now subclasses of DecodeError.\n0.7\nFix opening WAV and AIFF files via Unicode filenames.\n0.6\nMake FFmpeg timeout more robust.\nDump FFmpeg output on timeout.\nFix a nondeterministic hang in the Gstreamer backend.\nFix a file descriptor leak in the MAD backend.\n0.5\nFix crash when FFmpeg fails to report a duration.\nFix a hang when FFmpeg fills up its stderr output buffer.\nAdd a timeout to ffmpeg tool execution (currently 10 seconds for each\n4096-byte read); a ReadTimeoutError exception is raised if the tool times\nout.\n0.4\nFix channel count detection for FFmpeg backend.\n0.3\nFix a problem with the Gstreamer backend where audio files could be left open\neven after the GstAudioFile was \"closed\".\n0.2\nFix a hang in the GStreamer backend that occurs occasionally on some\nplatforms.\n0.1\nInitial release.\n\n\nEt Cetera\naudioread is by Adrian Sampson. It is made available under the MIT\nlicense. An alternative to this module is decoder.py.\n\n\n", "description": "Cross-library audio decoding for Python."}, {"name": "attrs", "readme": "\n\n\n\n\n\nattrs is the Python package that will bring back the joy of writing classes by relieving you from the drudgery of implementing object protocols (aka dunder methods).\nTrusted by NASA for Mars missions since 2020!\nIts main goal is to help you to write concise and correct software without slowing down your code.\nSponsors\nattrs would not be possible without our amazing sponsors.\nEspecially those generously supporting us at the The Organization tier and higher:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPlease consider joining them to help make attrs\u2019s maintenance more sustainable!\n\nExample\nattrs gives you a class decorator and a way to declaratively define the attributes on that class:\n>>> from attrs import asdict, define, make_class, Factory\n\n>>> @define\n... class SomeClass:\n...     a_number: int = 42\n...     list_of_numbers: list[int] = Factory(list)\n...\n...     def hard_math(self, another_number):\n...         
return self.a_number + sum(self.list_of_numbers) * another_number\n\n\n>>> sc = SomeClass(1, [1, 2, 3])\n>>> sc\nSomeClass(a_number=1, list_of_numbers=[1, 2, 3])\n\n>>> sc.hard_math(3)\n19\n>>> sc == SomeClass(1, [1, 2, 3])\nTrue\n>>> sc != SomeClass(2, [3, 2, 1])\nTrue\n\n>>> asdict(sc)\n{'a_number': 1, 'list_of_numbers': [1, 2, 3]}\n\n>>> SomeClass()\nSomeClass(a_number=42, list_of_numbers=[])\n\n>>> C = make_class(\"C\", [\"a\", \"b\"])\n>>> C(\"foo\", \"bar\")\nC(a='foo', b='bar')\n\nAfter declaring your attributes, attrs gives you:\n\na concise and explicit overview of the class's attributes,\na nice human-readable __repr__,\nequality-checking methods,\nan initializer,\nand much more,\n\nwithout writing dull boilerplate code again and again and without runtime performance penalties.\nHate type annotations!?\nNo problem!\nTypes are entirely optional with attrs.\nSimply assign attrs.field() to the attributes instead of annotating them with types.\n\nThis example uses attrs's modern APIs that have been introduced in version 20.1.0, and the attrs package import name that has been added in version 21.3.0.\nThe classic APIs (@attr.s, attr.ib, plus their serious-business aliases) and the attr package import name will remain indefinitely.\nPlease check out On The Core API Names for a more in-depth explanation.\nData Classes\nOn the tin, attrs might remind you of dataclasses (and indeed, dataclasses are a descendant of attrs).\nIn practice it does a lot more and is more flexible.\nFor instance it allows you to define special handling of NumPy arrays for equality checks, or allows more ways to plug into the initialization process.\nFor more details, please refer to our comparison page.\nProject Information\n\nChangelog\nDocumentation\nPyPI\nSource Code\nContributing\nThird-party Extensions\nLicense: MIT\nGet Help: please use the python-attrs tag on StackOverflow\nSupported Python Versions: 3.7 and later\n\nattrs for Enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of attrs and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source packages you use to build your applications.\nSave time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use.\nLearn more.\nRelease Information\nBackwards-incompatible Changes\n\nPython 3.6 has been dropped and packaging switched to static package data using Hatch.\n#993\n\nDeprecations\n\n\nThe support for zope-interface via the attrs.validators.provides validator is now deprecated and will be removed in, or after, April 2024.\nThe presence of a C-based package in our developement dependencies has caused headaches and we're not under the impression it's used a lot.\nLet us know if you're using it and we might publish it as a separate package.\n#1120\n\n\nChanges\n\n\nattrs.filters.exclude() and attrs.filters.include() now support the passing of attribute names as strings.\n#1068\n\n\nattrs.has() and attrs.fields() now handle generic classes correctly.\n#1079\n\n\nFix frozen exception classes when raised within e.g. contextlib.contextmanager, which mutates their __traceback__ attributes.\n#1081\n\n\n@frozen now works with type checkers that implement PEP-681 (ex. 
pyright).\n#1084\n\n\nRestored ability to unpickle instances pickled before 22.2.0.\n#1085\n\n\nattrs.asdict()'s and attrs.astuple()'s type stubs now accept the attrs.AttrsInstance protocol.\n#1090\n\n\nFix slots class cellvar updating closure in CPython 3.8+ even when __code__ introspection is unavailable.\n#1092\n\n\nattrs.resolve_types() can now pass include_extras to typing.get_type_hints() on Python 3.9+, and does so by default.\n#1099\n\n\nAdded instructions for pull request workflow to CONTRIBUTING.md.\n#1105\n\n\nAdded type parameter to attrs.field() function for use with attrs.make_class().\nPlease note that type checkers ignore type metadata passed into make_class(), but it can be useful if you're wrapping attrs.\n#1107\n\n\nIt is now possible for attrs.evolve() (and attr.evolve()) to change fields named inst if the instance is passed as a positional argument.\nPassing the instance using the inst keyword argument is now deprecated and will be removed in, or after, April 2024.\n#1117\n\n\nattrs.validators.optional() now also accepts a tuple of validators (in addition to lists of validators).\n#1122\n\n\n\nFull changelog\n", "description": "Attributes without boilerplate."}, {"name": "async-timeout", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nasync-timeout\nUsage example\nInstallation\nAuthors and License\n\n\n\n\n\nREADME.rst\n\n\n\n\nasync-timeout\n\n\n\n\n\n\n\nasyncio-compatible timeout context manager.\n\nUsage example\nThe context manager is useful in cases when you want to apply timeout\nlogic around block of code or in cases when asyncio.wait_for() is\nnot suitable. Also it's much faster than asyncio.wait_for()\nbecause timeout doesn't create a new task.\nThe timeout(delay, *, loop=None) call returns a context manager\nthat cancels a block on timeout expiring:\nfrom async_timeout import timeout\nasync with timeout(1.5):\n    await inner()\n\n\nIf inner() is executed faster than in 1.5 seconds nothing\nhappens.\nOtherwise inner() is cancelled internally by sending\nasyncio.CancelledError into but asyncio.TimeoutError is\nraised outside of context manager scope.\n\ntimeout parameter could be None for skipping timeout functionality.\nAlternatively, timeout_at(when) can be used for scheduling\nat the absolute time:\nloop = asyncio.get_event_loop()\nnow = loop.time()\n\nasync with timeout_at(now + 1.5):\n    await inner()\n\nPlease note: it is not POSIX time but a time with\nundefined starting base, e.g. 
the time of the system power on.\nContext manager has .expired property for check if timeout happens\nexactly in context manager:\nasync with timeout(1.5) as cm:\n    await inner()\nprint(cm.expired)\n\nThe property is True if inner() execution is cancelled by\ntimeout context manager.\nIf inner() call explicitly raises TimeoutError cm.expired\nis False.\nThe scheduled deadline time is available as .deadline property:\nasync with timeout(1.5) as cm:\n    cm.deadline\n\nNot finished yet timeout can be rescheduled by shift_by()\nor shift_to() methods:\nasync with timeout(1.5) as cm:\n    cm.shift(1)  # add another second on waiting\n    cm.update(loop.time() + 5)  # reschedule to now+5 seconds\n\nRescheduling is forbidden if the timeout is expired or after exit from async with\ncode block.\n\nInstallation\n$ pip install async-timeout\n\nThe library is Python 3 only!\n\nAuthors and License\nThe module is written by Andrew Svetlov.\nIt's Apache 2 licensed and freely available.\n\n\n"}, {"name": "asttokens", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nASTTokens\nInstallation\nUsage\nContribute\n\n\n\n\n\nREADME.rst\n\n\n\n\nASTTokens\n\n\n\n\n\n\n\n\n\nThe asttokens module annotates Python abstract syntax trees (ASTs) with the positions of tokens\nand text in the source code that generated them.\nIt makes it possible for tools that work with logical AST nodes to find the particular text that\nresulted in those nodes, for example for automated refactoring or highlighting.\n\nInstallation\nasttokens is available on PyPI: https://pypi.python.org/pypi/asttokens/:\npip install asttokens\n\nThe code is on GitHub: https://github.com/gristlabs/asttokens.\nThe API Reference is here: http://asttokens.readthedocs.io/en/latest/api-index.html.\n\nUsage\nASTTokens works with both Python2 and Python3.\nASTTokens can annotate both trees built by ast,\nAND those built by astroid.\nHere's an example:\nimport asttokens, ast\nsource = \"Robot('blue').walk(steps=10*n)\"\natok = asttokens.ASTTokens(source, parse=True)\nOnce the tree has been marked, nodes get .first_token, .last_token attributes, and\nthe ASTTokens object offers helpful methods:\nattr_node = next(n for n in ast.walk(atok.tree) if isinstance(n, ast.Attribute))\nprint(atok.get_text(attr_node))\nstart, end = attr_node.last_token.startpos, attr_node.last_token.endpos\nprint(atok.text[:start] + 'RUN' + atok.text[end:])\nWhich produces this output:\nRobot('blue').walk\nRobot('blue').RUN(steps=10*n)\n\nThe ASTTokens object also offers methods to walk and search the list of tokens that make up\nthe code (or a particular AST node), which is more useful and powerful than dealing with the text\ndirectly.\n\nContribute\nTo contribute:\n\nFork this repository, and clone your fork.\n\nInstall the package with test dependencies (ideally in a virtualenv) with:\npip install -e '.[test]'\n\n\nRun tests in your current interpreter with the command pytest or python -m pytest.\n\nRun tests across all supported interpreters with the tox command. You will need to have the interpreters installed separately. We recommend pyenv for that. Use tox -p auto to run the tests in parallel.\n\nBy default certain tests which take a very long time to run are skipped, but they are run on travis CI. To run them locally, set the environment variable ASTTOKENS_SLOW_TESTS. 
For example run ASTTOKENS_SLOW_TESTS=1 tox to run the full suite of tests.\n\n\n\n\n", "description": "Annotate AST trees with source text positions"}, {"name": "asn1crypto", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nasn1crypto\nSponsorship\nCorporate\nPersonal\nFeatures\nWhy Another Python ASN.1 Library?\nRelated Crypto Libraries\nCurrent Release\nDependencies\nInstallation\nLicense\nSecurity Policy\nDocumentation\nTutorials\nReference\nContinuous Integration\nTesting\nGit Repository\nPyPi Source Distribution\nPackage\nDevelopment\nCI Tasks\n\n\n\n\n\nreadme.md\n\n\n\n\nasn1crypto\nA fast, pure Python library for parsing and serializing ASN.1 structures.\n\nSponsorship\nFeatures\nWhy Another Python ASN.1 Library?\nRelated Crypto Libraries\nCurrent Release\nDependencies\nInstallation\nLicense\nSecurity Policy\nDocumentation\nContinuous Integration\nTesting\nDevelopment\nCI Tasks\n\n\n\n\nSponsorship\nA quick thank you to all of the sponsors who donate to support the asn1crypto project:\nCorporate\n\nAuxpex Labs\nGitHub\n\nPersonal\n\nNothing4You\nsthagen\n\nFeatures\nIn addition to an ASN.1 BER/DER decoder and DER serializer, the project includes\na bunch of ASN.1 structures for use with various common cryptography standards:\n\n\n\nStandard\nModule\nSource\n\n\n\n\nX.509\nasn1crypto.x509\nRFC 5280\n\n\nCRL\nasn1crypto.crl\nRFC 5280\n\n\nCSR\nasn1crypto.csr\nRFC 2986, RFC 2985\n\n\nOCSP\nasn1crypto.ocsp\nRFC 6960\n\n\nPKCS#12\nasn1crypto.pkcs12\nRFC 7292\n\n\nPKCS#8\nasn1crypto.keys\nRFC 5208\n\n\nPKCS#1 v2.1 (RSA keys)\nasn1crypto.keys\nRFC 3447\n\n\nDSA keys\nasn1crypto.keys\nRFC 3279\n\n\nElliptic curve keys\nasn1crypto.keys\nSECG SEC1 V2\n\n\nPKCS#3 v1.4\nasn1crypto.algos\nPKCS#3 v1.4\n\n\nPKCS#5 v2.1\nasn1crypto.algos\nPKCS#5 v2.1\n\n\nCMS (and PKCS#7)\nasn1crypto.cms\nRFC 5652, RFC 2315\n\n\nTSP\nasn1crypto.tsp\nRFC 3161\n\n\nPDF signatures\nasn1crypto.pdf\nPDF 1.7\n\n\n\nWhy Another Python ASN.1 Library?\nPython has long had the pyasn1 and\npyasn1_modules available for\nparsing and serializing ASN.1 structures. While the project does include a\ncomprehensive set of tools for parsing and serializing, the performance of the\nlibrary can be very poor, especially when dealing with bit fields and parsing\nlarge structures such as CRLs.\nAfter spending extensive time using pyasn1, the following issues were\nidentified:\n\nPoor performance\nVerbose, non-pythonic API\nOut-dated and incomplete definitions in pyasn1-modules\nNo simple way to map data to native Python data structures\nNo mechanism for overridden universal ASN.1 types\n\nThe pyasn1 API is largely method driven, and uses extensive configuration\nobjects and lowerCamelCase names. There were no consistent options for\nconverting types of native Python data structures. Since the project supports\nout-dated versions of Python, many newer language features are unavailable\nfor use.\nTime was spent trying to profile issues with the performance, however the\narchitecture made it hard to pin down the primary source of the poor\nperformance. Attempts were made to improve performance by utilizing unreleased\npatches and delaying parsing using the Any type. Even with such changes, the\nperformance was still unacceptably slow.\nFinally, a number of structures in the cryptographic space use universal data\ntypes such as BitString and OctetString, but interpret the data as other\ntypes. For instance, signatures are really byte strings, but are encoded as\nBitString. Elliptic curve keys use both BitString and OctetString to\nrepresent integers. 
Parsing these structures as the base universal types and\nthen re-interpreting them wastes computation.\nasn1crypto uses the following techniques to improve performance, especially\nwhen extracting one or two fields from large, complex structures:\n\nDelayed parsing of byte string values\nPersistence of original ASN.1 encoded data until a value is changed\nLazy loading of child fields\nUtilization of high-level Python stdlib modules\n\nWhile there is no extensive performance test suite, the\nCRLTests.test_parse_crl test case was used to parse a 21MB CRL file on a\nlate 2013 rMBP. asn1crypto parsed the certificate serial numbers in just\nunder 8 seconds. With pyasn1, using definitions from pyasn1-modules, the\nsame parsing took over 4,100 seconds.\nFor smaller structures the performance difference can range from a few times\nfaster to an order of magnitude or more.\nRelated Crypto Libraries\nasn1crypto is part of the modularcrypto family of Python packages:\n\nasn1crypto\noscrypto\ncsrbuilder\ncertbuilder\ncrlbuilder\nocspbuilder\ncertvalidator\n\nCurrent Release\n1.5.1 - changelog\nDependencies\nPython 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11 or pypy.\nNo third-party packages required.\nInstallation\npip install asn1crypto\nLicense\nasn1crypto is licensed under the terms of the MIT license. See the\nLICENSE file for the exact license text.\nSecurity Policy\nThe security policies for this project are covered in\nSECURITY.md.\nDocumentation\nThe documentation for asn1crypto is composed of tutorials on basic usage and\nlinks to the source for the various pre-defined type classes.\nTutorials\n\nUniversal Types with BER/DER Decoder and DER Encoder\nPEM Encoder and Decoder\n\nReference\n\nUniversal types, asn1crypto.core\nDigest, HMAC, signed digest and encryption algorithms, asn1crypto.algos\nPrivate and public keys, asn1crypto.keys\nX509 certificates, asn1crypto.x509\nCertificate revocation lists (CRLs), asn1crypto.crl\nOnline certificate status protocol (OCSP), asn1crypto.ocsp\nCertificate signing requests (CSRs), asn1crypto.csr\nPrivate key/certificate containers (PKCS#12), asn1crypto.pkcs12\nCryptographic message syntax (CMS, PKCS#7), asn1crypto.cms\nTime stamp protocol (TSP), asn1crypto.tsp\nPDF signatures, asn1crypto.pdf\n\nContinuous Integration\nVarious combinations of platforms and versions of Python are tested via:\n\nmacOS, Linux, Windows via GitHub Actions\narm64 via CircleCI\n\nTesting\nTests are written using unittest and require no third-party packages.\nDepending on what type of source is available for the package, the following\ncommands can be used to run the test suite.\nGit Repository\nWhen working within a Git working copy, or an archive of the Git repository,\nthe full test suite is run via:\npython run.py tests\nTo run only some tests, pass a regular expression as a parameter to tests.\npython run.py tests ocsp\nPyPi Source Distribution\nWhen working within an extracted source distribution (aka .tar.gz) from\nPyPi, the full test suite is run via:\npython setup.py test\nPackage\nWhen the package has been installed via pip (or another method), the package\nasn1crypto_tests may be installed and invoked to run the full test suite:\npip install asn1crypto_tests\npython -m asn1crypto_tests\nDevelopment\nTo install the package used for linting, execute:\npip install --user -r requires/lint\nThe following command will run the linter:\npython run.py lint\nSupport for code coverage can be installed via:\npip install --user -r requires/coverage\nCoverage is 
measured by running:\npython run.py coverage\nTo change the version number of the package, run:\npython run.py version {pep440_version}\nTo install the necessary packages for releasing a new version on PyPI, run:\npip install --user -r requires/release\nReleases are created by:\n\n\nMaking a git tag in PEP 440 format\n\n\nRunning the command:\npython run.py release\n\n\nExisting releases can be found at https://pypi.org/project/asn1crypto/.\nCI Tasks\nA task named deps exists to download and stage all necessary testing\ndependencies. On posix platforms, curl is used for downloads and on Windows\nPowerShell with Net.WebClient is used. This configuration sidesteps issues\nrelated to getting pip to work properly and messing with site-packages for\nthe version of Python being used.\nThe ci task runs lint (if flake8 is available for the version of Python) and\ncoverage (or tests if coverage is not available for the version of Python).\nIf the current directory is a clean git working copy, the coverage data is\nsubmitted to codecov.io.\npython run.py deps\npython run.py ci\n\n\n", "description": "ASN.1 parser and serializer with definitions for private keys, public keys, certificates, CRL, OCSP, CMS, PKCS#3, PKCS#7, PKCS#8, PKCS#12, PKCS#5, X.509 and TSP."}, {"name": "arviz", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nArviZ in other languages\nDocumentation\nInstallation\nStable\nDevelopment\nGallery\nCitation\nContributions\nCode of Conduct\nDonations\nSponsors\n\n\n\n\n\nREADME.md\n\n\n\n\n\n\n\n\n\n\n\n \n\nArviZ (pronounced \"AR-vees\") is a Python package for exploratory analysis of Bayesian models.\nIncludes functions for posterior analysis, data storage, model checking, comparison and diagnostics.\nArviZ in other languages\nArviZ also has a Julia wrapper available ArviZ.jl.\nDocumentation\nThe ArviZ documentation can be found in the official docs.\nFirst time users may find the quickstart\nto be helpful. 
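For a first taste of the API, a quickstart-style snippet might look like this (a sketch; it uses one of the example datasets bundled for the documentation and needs matplotlib installed for plotting):

import arviz as az

idata = az.load_arviz_data("centered_eight")  # example InferenceData shipped with ArviZ
az.summary(idata)         # posterior summary statistics
az.plot_posterior(idata)  # posterior plots, one panel per variable
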
Additional guidance can be found in the\nuser guide.\nInstallation\nStable\nArviZ is available for installation from PyPI.\nThe latest stable version can be installed using pip:\npip install arviz\n\nArviZ is also available through conda-forge.\nconda install -c conda-forge arviz\n\nDevelopment\nThe latest development version can be installed from the main branch using pip:\npip install git+git://github.com/arviz-devs/arviz.git\n\nAnother option is to clone the repository and install using git and setuptools:\ngit clone https://github.com/arviz-devs/arviz.git\ncd arviz\npython setup.py install\n\n\nGallery\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAnd more...\n\n## Dependencies\nArviZ is tested on Python 3.9, 3.10 and 3.11, and depends on NumPy, SciPy, xarray, and Matplotlib.\nCitation\nIf you use ArviZ and want to cite it please use \nHere is the citation in BibTeX format\n@article{arviz_2019,\n  doi = {10.21105/joss.01143},\n  url = {https://doi.org/10.21105/joss.01143},\n  year = {2019},\n  publisher = {The Open Journal},\n  volume = {4},\n  number = {33},\n  pages = {1143},\n  author = {Ravin Kumar and Colin Carroll and Ari Hartikainen and Osvaldo Martin},\n  title = {ArviZ a unified library for exploratory analysis of Bayesian models in Python},\n  journal = {Journal of Open Source Software}\n}\n\nContributions\nArviZ is a community project and welcomes contributions.\nAdditional information can be found in the Contributing Readme\nCode of Conduct\nArviZ wishes to maintain a positive community. Additional details\ncan be found in the Code of Conduct\nDonations\nArviZ is a non-profit project under NumFOCUS umbrella. If you want to support ArviZ financially, you can donate here.\nSponsors\n\n\n\n", "description": "Exploratory analysis of Bayesian models"}, {"name": "argon2-cffi", "readme": "\nargon2-cffi: Argon2 for Python\nArgon2 won the Password Hashing Competition and argon2-cffi is the simplest way to use it in Python:\n>>> from argon2 import PasswordHasher\n>>> ph = PasswordHasher()\n>>> hash = ph.hash(\"correct horse battery staple\")\n>>> hash  # doctest: +SKIP\n'$argon2id$v=19$m=65536,t=3,p=4$MIIRqgvgQbgj220jfp0MPA$YfwJSVjtjSU0zzV/P3S9nnQ/USre2wvJMjfCIjrTQbg'\n>>> ph.verify(hash, \"correct horse battery staple\")\nTrue\n>>> ph.check_needs_rehash(hash)\nFalse\n>>> ph.verify(hash, \"Tr0ub4dor&3\")\nTraceback (most recent call last):\n  ...\nargon2.exceptions.VerifyMismatchError: The password does not match the supplied hash\n\nProject Links\n\nPyPI\nGitHub\nDocumentation\nChangelog\nFunding\nThe low-level Argon2 CFFI bindings are maintained in the separate argon2-cffi-bindings project.\n\nRelease Information\nRemoved\n\nPython 3.6 is not supported anymore.\n\nDeprecated\n\n\nThe InvalidHash exception is deprecated in favor of InvalidHashError.\nNo plans for removal currently exist and the names can (but shouldn't) be used interchangeably.\n\n\nargon2.hash_password(), argon2.hash_password_raw(), and argon2.verify_password() that have been soft-deprecated since 2016 are now hard-deprecated.\nThey now raise DeprecationWarnings and will be removed in 2024.\n\n\nAdded\n\n\nOfficial support for Python 3.11 and 3.12.\nNo code changes were necessary.\n\n\nargon2.exceptions.InvalidHashError as a replacement for InvalidHash.\n\n\nsalt parameter to argon2.PasswordHasher.hash() to allow for custom salts.\nThis is only useful for specialized use-cases -- leave it on None unless you know 
exactly what you are doing.\n#153\n\n\n\n\u2192 Full Changelog\nCredits\nargon2-cffi is maintained by Hynek Schlawack.\nThe development is kindly supported by my employer Variomedia AG, argon2-cffi Tidelift subscribers, and my amazing GitHub Sponsors.\nargon2-cffi for Enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of argon2-cffi and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open-source packages you use to build your applications.\nSave time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use.\nLearn more.\n"}, {"name": "argon2-cffi-bindings", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLow-level Python CFFI Bindings for Argon2\nUsage\nDisabling Vendored Code\nOverriding Automatic SSE2 Detection\nPython API\nProject Information\nCredits & License\nVendored Code\nargon2-cffi-bindings for Enterprise\n\n\n\n\n\nREADME.md\n\n\n\n\nLow-level Python CFFI Bindings for Argon2\n\n\n\nargon2-cffi-bindings provides low-level CFFI bindings to the official implementation of the Argon2 password hashing algorithm.\nThe currently vendored Argon2 commit ID is f57e61e.\nNote\nIf you want to hash passwords in an application, this package is not for you.\nHave a look at argon2-cffi with its high-level abstractions!\nThese bindings have been extracted from argon2-cffi and it remains its main consumer.\nHowever, they may be used by other packages that want to use the Argon2 library without dealing with C-related complexities.\nUsage\nargon2-cffi-bindings is available from PyPI.\nThe provided CFFI bindings are compiled in API mode.\nBest effort is given to provide binary wheels for as many platforms as possible.\nDisabling Vendored Code\nA copy of Argon2 is vendored and used by default, but can be disabled if argon2-cffi-bindings is installed using:\n$ env ARGON2_CFFI_USE_SYSTEM=1 \\\n  python -Im pip install --no-binary=argon2-cffi-bindings argon2-cffi-bindings\nOverriding Automatic SSE2 Detection\nUsually the build process tries to guess whether or not it should use SSE2-optimized code (see _ffi_build.py for details).\nThis can go wrong and is problematic for cross-compiling.\nTherefore you can use the ARGON2_CFFI_USE_SSE2 environment variable to control the process:\n\nIf you set it to 1, argon2-cffi-bindings will build with SSE2 support.\nIf you set it to 0, argon2-cffi-bindings will build without SSE2 support.\nIf you set it to anything else, it will be ignored and argon2-cffi-bindings will try to guess.\n\nHowever, if our heuristics fail you, we would welcome a bug report.\nPython API\nSince this package is intended to be an implementation detail, it uses a private module name to prevent your users from using it by accident.\nTherefore you have to import the symbols from _argon2_cffi_bindings:\nfrom _argon2_cffi_bindings import ffi, lib\nPlease refer to cffi documentation on how to use the ffi and lib objects.\nThe list of symbols that are provided can be found in the _ffi_build.py file.\nProject Information\n\nChangelog\nDocumentation\nPyPI\nSource Code\n\nCredits & License\nargon2-cffi-bindings is written and maintained by Hynek Schlawack.\nIt is released under the MIT license.\nThe development is kindly supported by Variomedia AG.\nThe authors of Argon2 were very helpful to get the library to compile on ancient versions of Visual Studio for ancient versions of Python.\nThe documentation quotes frequently in verbatim from the Argon2 paper to avoid mistakes by 
rephrasing.\nVendored Code\nThe original Argon2 repo can be found at https://github.com/P-H-C/phc-winner-argon2/.\nExcept for the components listed below, the Argon2 code in this repository is copyright (c) 2015 Daniel Dinu, Dmitry Khovratovich (main authors), Jean-Philippe Aumasson and Samuel Neves, and under CC0 license.\nThe string encoding routines in src/encoding.c are copyright (c) 2015 Thomas Pornin, and under CC0 license.\nThe BLAKE2 code in src/blake2/ is copyright (c) Samuel Neves, 2013-2015, and under CC0 license.\nargon2-cffi-bindings for Enterprise\nAvailable as part of the Tidelift Subscription.\nThe maintainers of argon2-cffi-bindings and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open-source packages you use to build your applications.\nSave time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use.\nLearn more.\n\n\n"}, {"name": "argcomplete", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nargcomplete - Bash/zsh tab completion for argparse\nInstallation\nSynopsis\nargcomplete.autocomplete(parser)\nSpecifying completers\nReadline-style completers\nPrinting warnings in completers\nUsing a custom completion validator\nGlobal completion\nActivating global completion\nZsh Support\nPython Support\nSupport for other shells\nCommon Problems\nDebugging\nAcknowledgments\nLinks\nBugs\nLicense\n\n\n\n\n\nREADME.rst\n\n\n\n\nargcomplete - Bash/zsh tab completion for argparse\nTab complete all the things!\nArgcomplete provides easy, extensible command line tab completion of arguments for your Python application.\nIt makes two assumptions:\n\nYou're using bash or zsh as your shell\nYou're using argparse to manage your command line arguments/options\n\nArgcomplete is particularly useful if your program has lots of options or subparsers, and if your program can\ndynamically suggest completions for your argument/option values (for example, if the user is browsing resources over\nthe network).\n\nInstallation\npip install argcomplete\nactivate-global-python-argcomplete\n\nSee Activating global completion below for details about the second step.\nRefresh your shell environment (start a new shell).\n\nSynopsis\nAdd the PYTHON_ARGCOMPLETE_OK marker and a call to argcomplete.autocomplete() to your Python application as\nfollows:\n#!/usr/bin/env python\n# PYTHON_ARGCOMPLETE_OK\nimport argcomplete, argparse\nparser = argparse.ArgumentParser()\n...\nargcomplete.autocomplete(parser)\nargs = parser.parse_args()\n...\nRegister your Python application with your shell's completion framework by running register-python-argcomplete:\neval \"$(register-python-argcomplete my-python-app)\"\n\nQuotes are significant; the registration will fail without them. See Global completion below for a way to enable\nargcomplete generally without registering each application individually.\n\nargcomplete.autocomplete(parser)\nThis method is the entry point to the module. It must be called after ArgumentParser construction is complete, but\nbefore the ArgumentParser.parse_args() method is called. The method looks for an environment variable that the\ncompletion hook shellcode sets, and if it's there, collects completions, prints them to the output stream (fd 8 by\ndefault), and exits. Otherwise, it returns to the caller immediately.\n\nSide effects\nArgcomplete gets completions by running your program. It intercepts the execution flow at the moment\nargcomplete.autocomplete() is called. 
After sending completions, it exits using exit_method (os._exit\nby default). This means if your program has any side effects that happen before argcomplete is called, those\nside effects will happen every time the user presses <TAB> (although anything your program prints to stdout or\nstderr will be suppressed). For this reason it's best to construct the argument parser and call\nargcomplete.autocomplete() as early as possible in your execution flow.\n\n\nPerformance\nIf the program takes a long time to get to the point where argcomplete.autocomplete() is called, the tab completion\nprocess will feel sluggish, and the user may lose confidence in it. So it's also important to minimize the startup time\nof the program up to that point (for example, by deferring initialization or importing of large modules until after\nparsing options).\n\n\nSpecifying completers\nYou can specify custom completion functions for your options and arguments. Two styles are supported: callable and\nreadline-style. Callable completers are simpler. They are called with the following keyword arguments:\n\nprefix: The prefix text of the last word before the cursor on the command line.\nFor dynamic completers, this can be used to reduce the work required to generate possible completions.\naction: The argparse.Action instance that this completer was called for.\nparser: The argparse.ArgumentParser instance that the action was taken by.\nparsed_args: The result of argument parsing so far (the argparse.Namespace args object normally returned by\nArgumentParser.parse_args()).\n\nCompleters can return their completions as an iterable of strings or a mapping (dict) of strings to their\ndescriptions (zsh will display the descriptions as context help alongside completions). An example completer for names\nof environment variables might look like this:\ndef EnvironCompleter(**kwargs):\n    return os.environ\nTo specify a completer for an argument or option, set the completer attribute of its associated action. An easy\nway to do this at definition time is:\nfrom argcomplete.completers import EnvironCompleter\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--env-var1\").completer = EnvironCompleter\nparser.add_argument(\"--env-var2\").completer = EnvironCompleter\nargcomplete.autocomplete(parser)\nIf you specify the choices keyword for an argparse option or argument (and don't specify a completer), it will be\nused for completions.\nA completer that is initialized with a set of all possible choices of values for its action might look like this:\nclass ChoicesCompleter(object):\n    def __init__(self, choices):\n        self.choices = choices\n\n    def __call__(self, **kwargs):\n        return self.choices\nThe following two ways to specify a static set of choices are equivalent for completion purposes:\nfrom argcomplete.completers import ChoicesCompleter\n\nparser.add_argument(\"--protocol\", choices=('http', 'https', 'ssh', 'rsync', 'wss'))\nparser.add_argument(\"--proto\").completer=ChoicesCompleter(('http', 'https', 'ssh', 'rsync', 'wss'))\nNote that if you use the choices=<completions> option, argparse will show\nall these choices in the --help output by default. 
To prevent this, set\nmetavar (like parser.add_argument(\"--protocol\", metavar=\"PROTOCOL\",\nchoices=('http', 'https', 'ssh', 'rsync', 'wss'))).\nThe following script uses\nparsed_args and Requests to query GitHub for publicly known members of an\norganization and complete their names, then prints the member description:\n#!/usr/bin/env python\n# PYTHON_ARGCOMPLETE_OK\nimport argcomplete, argparse, requests, pprint\n\ndef github_org_members(prefix, parsed_args, **kwargs):\n    resource = \"https://api.github.com/orgs/{org}/members\".format(org=parsed_args.organization)\n    return (member['login'] for member in requests.get(resource).json() if member['login'].startswith(prefix))\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--organization\", help=\"GitHub organization\")\nparser.add_argument(\"--member\", help=\"GitHub member\").completer = github_org_members\n\nargcomplete.autocomplete(parser)\nargs = parser.parse_args()\n\npprint.pprint(requests.get(\"https://api.github.com/users/{m}\".format(m=args.member)).json())\nTry it like this:\n./describe_github_user.py --organization heroku --member <TAB>\n\nIf you have a useful completer to add to the completer library, send a pull request!\n\nReadline-style completers\nThe readline module defines a completer protocol in rlcompleter. Readline-style completers are also supported by\nargcomplete, so you can use the same completer object both in an interactive readline-powered shell and on the command\nline. For example, you can use the readline-style completer provided by IPython to get introspective completions like\nyou would get in the IPython shell:\nimport IPython\nparser.add_argument(\"--python-name\").completer = IPython.core.completer.Completer()\nargcomplete.CompletionFinder.rl_complete can also be used to plug in an argparse parser as a readline completer.\n\nPrinting warnings in completers\nNormal stdout/stderr output is suspended when argcomplete runs. Sometimes, though, when the user presses <TAB>, it's\nappropriate to print information about why completions generation failed. To do this, use warn:\nfrom argcomplete import warn\n\ndef AwesomeWebServiceCompleter(prefix, **kwargs):\n    if login_failed:\n        warn(\"Please log in to Awesome Web Service to use autocompletion\")\n    return completions\n\nUsing a custom completion validator\nBy default, argcomplete validates your completions by checking if they start with the prefix given to the completer. You\ncan override this validation check by supplying the validator keyword to argcomplete.autocomplete():\ndef my_validator(completion_candidate, current_input):\n    \"\"\"Complete non-prefix substring matches.\"\"\"\n    return current_input in completion_candidate\n\nargcomplete.autocomplete(parser, validator=my_validator)\n\nGlobal completion\nIn global completion mode, you don't have to register each argcomplete-capable executable separately. Instead, the shell\nwill look for the string PYTHON_ARGCOMPLETE_OK in the first 1024 bytes of any executable that it's running\ncompletion for, and if it's found, follow the rest of the argcomplete protocol as described above.\nAdditionally, completion is activated for scripts run as python <script> and python -m <module>. If you're using\nmultiple Python versions on the same system, the version being used to run the script must have argcomplete installed.\n\nBash version compatibility\nWhen using bash, global completion requires bash support for complete -D, which was introduced in bash 4.2. 
Since\nMac OS ships with an outdated version of Bash (3.2), you can either use zsh or install a newer version of bash using\nHomebrew (brew install bash - you will also need to add /usr/local/bin/bash to\n/etc/shells, and run chsh to change your shell). You can check the version of the running copy of bash with\necho $BASH_VERSION.\n\n\nNote\nIf you use setuptools/distribute scripts or entry_points directives to package your module,\nargcomplete will follow the wrapper scripts to their destination and look for PYTHON_ARGCOMPLETE_OK in the\ndestination code.\n\nIf you choose not to use global completion, or ship a completion module that depends on argcomplete, you must register\nyour script explicitly using eval \"$(register-python-argcomplete my-python-app)\". Standard completion module\nregistration rules apply: namely, the script name is passed directly to complete, meaning it is only tab completed\nwhen invoked exactly as it was registered. In the above example, my-python-app must be on the path, and the user\nmust be attempting to complete it by that name. The above line alone would not allow you to complete\n./my-python-app, or /path/to/my-python-app.\n\nActivating global completion\nThe script activate-global-python-argcomplete installs the global completion script\nbash_completion.d/_python-argcomplete\ninto an appropriate location on your system for both bash and zsh. The specific location depends on your platform and\nwhether you installed argcomplete system-wide using sudo or locally (into your user's home directory).\n\nZsh Support\nArgcomplete supports zsh. On top of plain completions like in bash, zsh allows you to see argparse help strings as\ncompletion descriptions. All shellcode included with argcomplete is compatible with both bash and zsh, so the same\ncompleter commands activate-global-python-argcomplete and eval \"$(register-python-argcomplete my-python-app)\"\nwork for zsh as well.\n\nPython Support\nArgcomplete requires Python 3.7+.\n\nSupport for other shells\nArgcomplete maintainers provide support only for the bash and zsh shells on Linux and MacOS. For resources related to\nother shells and platforms, including fish, tcsh, xonsh, powershell, and Windows, please see the\ncontrib directory.\n\nCommon Problems\nIf global completion is not completing your script, bash may have registered a default completion function:\n$ complete | grep my-python-app\ncomplete -F _minimal my-python-app\n\nYou can fix this by restarting your shell, or by running complete -r my-python-app.\n\nDebugging\nSet the _ARC_DEBUG variable in your shell to enable verbose debug output every time argcomplete runs. This will\ndisrupt the command line composition state of your terminal, but make it possible to see the internal state of the\ncompleter if it encounters problems.\n\nAcknowledgments\nInspired and informed by the optcomplete module by Martin Blais.\n\nLinks\n\nProject home page (GitHub)\nDocumentation\nPackage distribution (PyPI)\nChange log\n\n\nBugs\nPlease report bugs, issues, feature requests, etc. on GitHub.\n\nLicense\nCopyright 2012-2023, Andrey Kislyuk and argcomplete contributors. Licensed under the terms of the\nApache License, Version 2.0. 
Distribution of the LICENSE and NOTICE\nfiles with source copies of this package and derivative works is REQUIRED as specified by the Apache License.\n\n\n\n\n\n\n\n\n\n", "description": "Bash tab completion for argparse."}, {"name": "anytree", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nLinks\nGetting started\nDocumentation\nInstallation\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLinks\n\nDocumentation\nPyPI\nGitHub\nChangelog\nIssues\nContributors\nIf you enjoy anytree\n\n\n\nGetting started\nUsage is simple.\nConstruction\n>>> from anytree import Node, RenderTree\n>>> udo = Node(\"Udo\")\n>>> marc = Node(\"Marc\", parent=udo)\n>>> lian = Node(\"Lian\", parent=marc)\n>>> dan = Node(\"Dan\", parent=udo)\n>>> jet = Node(\"Jet\", parent=dan)\n>>> jan = Node(\"Jan\", parent=dan)\n>>> joe = Node(\"Joe\", parent=dan)\nNode\n>>> print(udo)\nNode('/Udo')\n>>> print(joe)\nNode('/Udo/Dan/Joe')\nTree\n>>> for pre, fill, node in RenderTree(udo):\n...     print(\"%s%s\" % (pre, node.name))\nUdo\n\u251c\u2500\u2500 Marc\n\u2502   \u2514\u2500\u2500 Lian\n\u2514\u2500\u2500 Dan\n    \u251c\u2500\u2500 Jet\n    \u251c\u2500\u2500 Jan\n    \u2514\u2500\u2500 Joe\nFor details see Node and RenderTree.\nVisualization\n>>> from anytree.exporter import UniqueDotExporter\n>>> # graphviz needs to be installed for the next line!\n>>> UniqueDotExporter(udo).to_picture(\"udo.png\")\n\nThe UniqueDotExporter can be started at any node and has various formatting hookups:\n>>> UniqueDotExporter(dan,\n...                   nodeattrfunc=lambda node: \"fixedsize=true, width=1, height=1, shape=diamond\",\n...                   edgeattrfunc=lambda parent, child: \"style=bold\"\n... ).to_picture(\"dan.png\")\n\nThere are various other Importers and Exporters.\nManipulation\nA second tree:\n>>> mary = Node(\"Mary\")\n>>> urs = Node(\"Urs\", parent=mary)\n>>> chris = Node(\"Chris\", parent=mary)\n>>> marta = Node(\"Marta\", parent=mary)\n>>> print(RenderTree(mary))\nNode('/Mary')\n\u251c\u2500\u2500 Node('/Mary/Urs')\n\u251c\u2500\u2500 Node('/Mary/Chris')\n\u2514\u2500\u2500 Node('/Mary/Marta')\nAppend:\n>>> udo.parent = mary\n>>> print(RenderTree(mary))\nNode('/Mary')\n\u251c\u2500\u2500 Node('/Mary/Urs')\n\u251c\u2500\u2500 Node('/Mary/Chris')\n\u251c\u2500\u2500 Node('/Mary/Marta')\n\u2514\u2500\u2500 Node('/Mary/Udo')\n    \u251c\u2500\u2500 Node('/Mary/Udo/Marc')\n    \u2502   \u2514\u2500\u2500 Node('/Mary/Udo/Marc/Lian')\n    \u2514\u2500\u2500 Node('/Mary/Udo/Dan')\n        \u251c\u2500\u2500 Node('/Mary/Udo/Dan/Jet')\n        \u251c\u2500\u2500 Node('/Mary/Udo/Dan/Jan')\n        \u2514\u2500\u2500 Node('/Mary/Udo/Dan/Joe')\nSubtree rendering:\n>>> print(RenderTree(marc))\nNode('/Mary/Udo/Marc')\n\u2514\u2500\u2500 Node('/Mary/Udo/Marc/Lian')\nCut:\n>>> dan.parent = None\n>>> print(RenderTree(dan))\nNode('/Dan')\n\u251c\u2500\u2500 Node('/Dan/Jet')\n\u251c\u2500\u2500 Node('/Dan/Jan')\n\u2514\u2500\u2500 Node('/Dan/Joe')\nExtending any python class to become a tree node\nThe entire tree magic is encapsulated by NodeMixin\nadd it as base class and the class becomes a tree node:\n>>> from anytree import NodeMixin, RenderTree\n>>> class MyBaseClass(object):  # Just an example of a base class\n...     foo = 4\n>>> class MyClass(MyBaseClass, NodeMixin):  # Add Node feature\n...     def __init__(self, name, length, width, parent=None, children=None):\n...         super(MyClass, self).__init__()\n...         self.name = name\n...         self.length = length\n...         self.width = width\n...         
self.parent = parent\n...         if children:\n...             self.children = children\nJust set the parent attribute to reflect the tree relation:\n>>> my0 = MyClass('my0', 0, 0)\n>>> my1 = MyClass('my1', 1, 0, parent=my0)\n>>> my2 = MyClass('my2', 0, 2, parent=my0)\n>>> for pre, fill, node in RenderTree(my0):\n...     treestr = u\"%s%s\" % (pre, node.name)\n...     print(treestr.ljust(8), node.length, node.width)\nmy0      0 0\n\u251c\u2500\u2500 my1  1 0\n\u2514\u2500\u2500 my2  0 2\nThe children can be used likewise:\n>>> my0 = MyClass('my0', 0, 0, children=[\n...     MyClass('my1', 1, 0),\n...     MyClass('my2', 0, 2),\n... ])\n>>> for pre, fill, node in RenderTree(my0):\n...     treestr = u\"%s%s\" % (pre, node.name)\n...     print(treestr.ljust(8), node.length, node.width)\nmy0      0 0\n\u251c\u2500\u2500 my1  1 0\n\u2514\u2500\u2500 my2  0 2\n\nDocumentation\nPlease see the Documentation for all details.\n\nInstallation\nTo install the anytree module run:\npip install anytree\n\nIf you do not have write-permissions to the python installation, try:\npip install anytree --user\n\n\n\n", "description": "Convert Python classes into tree nodes with children"}, {"name": "anyio", "readme": "\n\n\n\n\nAnyIO is an asynchronous networking and concurrency library that works on top of either asyncio or\ntrio. It implements trio-like structured concurrency (SC) on top of asyncio and works in harmony\nwith the native SC of trio itself.\nApplications and libraries written against AnyIO\u2019s API will run unmodified on either asyncio or\ntrio. AnyIO can also be adopted into a library or application incrementally \u2013 bit by bit, no full\nrefactoring necessary. It will blend in with the native libraries of your chosen backend.\n\nDocumentation\nView full documentation at: https://anyio.readthedocs.io/\n\n\nFeatures\nAnyIO offers the following functionality:\n\nTask groups (nurseries in trio terminology)\nHigh-level networking (TCP, UDP and UNIX sockets)\n\nHappy eyeballs algorithm for TCP connections (more robust than that of asyncio on Python\n3.8)\nasync/await style UDP sockets (unlike asyncio where you still have to use Transports and\nProtocols)\n\n\nA versatile API for byte streams and object streams\nInter-task synchronization and communication (locks, conditions, events, semaphores, object\nstreams)\nWorker threads\nSubprocesses\nAsynchronous file I/O (using worker threads)\nSignal handling\n\nAnyIO also comes with its own pytest plugin which also supports asynchronous fixtures.\nIt even works with the popular Hypothesis library.\n\n", "description": "High level compatibility layer for multiple asynchronous event loop implementations.", "category": "Async"}, {"name": "analytics-python", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nanalytics-python\n\ud83d\ude80 How to get started\n\ud83e\udd14 Why?\n\ud83d\udc68\u200d\ud83d\udcbb Getting Started\n\ud83d\ude80 Startup Program\nDocumentation\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nanalytics-python\n\nanalytics-python is a python client for Segment\n\n\nYou can't fix what you can't measure\n\nAnalytics helps you measure your users, product, and business. 
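To make the AnyIO feature list above more concrete, here is a minimal sketch of a task group (a nursery in trio terms); the coroutine name, arguments, and delays are invented for illustration:

import anyio

async def greet(name, delay):
    await anyio.sleep(delay)
    print('hello', name)

async def main():
    # Both tasks run concurrently; the group waits for all of them to finish.
    async with anyio.create_task_group() as tg:
        tg.start_soon(greet, 'trio', 0.2)
        tg.start_soon(greet, 'asyncio', 0.1)

anyio.run(main)  # runs on the asyncio backend by default

The same code is expected to run unchanged on trio by passing backend='trio' to anyio.run.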
It unlocks insights into your app's funnel, core business metrics, and whether you have a product-market fit.\n\ud83d\ude80 How to get started\n\nCollect analytics data from your app(s).\n\nThe top 200 Segment companies collect data from 5+ source types (web, mobile, server, CRM, etc.).\n\n\nSend the data to analytics tools (for example, Google Analytics, Amplitude, Mixpanel).\n\nOver 250+ Segment companies send data to eight categories of destinations such as analytics tools, warehouses, email marketing, and remarketing systems, session recording, and more.\n\n\nExplore your data by creating metrics (for example, new signups, retention cohorts, and revenue generation).\n\nThe best Segment companies use retention cohorts to measure product-market fit. Netflix has 70% paid retention after 12 months, 30% after 7 years.\n\n\n\nSegment collects analytics data and allows you to send it to more than 250 apps (such as Google Analytics, Mixpanel, Optimizely, Facebook Ads, Slack, Sentry) just by flipping a switch. You only need one Segment code snippet, and you can turn integrations on and off at will, with no additional code. Sign up with Segment today.\n\ud83e\udd14 Why?\n\n\nPower all your analytics apps with the same data. Instead of writing code to integrate all of your tools individually, send data to Segment, once.\n\n\nInstall tracking for the last time. We're the last integration you'll ever need to write. You only need to instrument Segment once. Reduce all of your tracking code and advertising tags into a single set of API calls.\n\n\nSend data from anywhere. Send Segment data from any device, and we'll transform and send it on to any tool.\n\n\nQuery your data in SQL. Slice, dice, and analyze your data in detail with Segment SQL. We'll transform and load your customer behavioral data directly from your apps into Amazon Redshift, Google BigQuery, or Postgres. Save weeks of engineering time by not having to invent your data warehouse and ETL pipeline.\nFor example, you can capture data on any app:\nanalytics.track('Order Completed', { price: 99.84 })\nThen, query the resulting data in SQL:\nselect * from app.order_completed\norder by price desc\n\n\n\ud83d\udc68\u200d\ud83d\udcbb Getting Started\nInstall segment-analytics-python using pip:\npip3 install segment-analytics-python\nor you can clone this repo:\ngit clone https://github.com/segmentio/analytics-python.git\n\ncd analytics-python\n\nsudo python3 setup.py install\nNow inside your app, you'll want to set your write_key before making any analytics calls:\nimport segment.analytics as analytics\n\nanalytics.write_key = 'YOUR_WRITE_KEY'\nNote If you need to send data to multiple Segment sources, you can initialize a new Client for each write_key\n\ud83d\ude80 Startup Program\n\n\n\nIf you are part of a new startup  (<$5M raised, <2 years since founding), we just launched a new startup program for you. You can get a Segment Team plan  (up to $25,000 value in Segment credits) for free up to 2 years \u2014 apply here!\nDocumentation\nDocumentation is available at https://segment.com/libraries/python.\nLicense\nWWWWWW||WWWWWW\n W W W||W W W\n      ||\n    ( OO )__________\n     /  |           \\\n    /o o|    MIT     \\\n    \\___/||_||__||_|| *\n         || ||  || ||\n        _||_|| _||_||\n       (__|__|(__|__|\n\n(The MIT License)\nCopyright (c) 2013 Segment Inc. 
friends@segment.com\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n\n"}, {"name": "aiosignal", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\naiosignal\nIntroduction\nInstallation\nDocumentation\nCommunication channels\nRequirements\nLicense\nSource code\n\n\n\n\n\nREADME.rst\n\n\n\n\naiosignal\n\n\n\n\n\n\n\n\n\n\n\nIntroduction\nA project to manage callbacks in asyncio projects.\nSignal is a list of registered asynchronous callbacks.\nThe signal's life-cycle has two stages: after creation its content\ncould be filled by using standard list operations: sig.append()\netc.\nAfter you call sig.freeze() the signal is frozen: adding, removing\nand dropping callbacks is forbidden.\nThe only available operation is calling the previously registered\ncallbacks by using await sig.send(data).\nFor concrete usage examples see the Signals\n<https://docs.aiohttp.org/en/stable/web_advanced.html#aiohttp-web-signals>\nsection of the `Web Server Advanced\n<https://docs.aiohttp.org/en/stable/web_advanced.html> chapter of the aiohttp\ndocumentation.\n\nInstallation\n$ pip install aiosignal\n\nThe library requires Python 3.8 or newer.\n\nDocumentation\nhttps://aiosignal.readthedocs.io/\n\nCommunication channels\ngitter chat https://gitter.im/aio-libs/Lobby\n\nRequirements\n\nPython >= 3.8\nfrozenlist >= 1.0.0\n\n\nLicense\naiosignal is offered under the Apache 2 license.\n\nSource code\nThe project is hosted on GitHub\nPlease file an issue in the bug tracker if you have found a bug\nor have some suggestions to improve the library.\n\n\n", "description": "Manage callback registration in asyncio"}, {"name": "aiohttp", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAsync http client/server framework\nKey Features\nGetting started\nClient\nServer\nDocumentation\nDemos\nExternal links\nCommunication channels\nRequirements\nLicense\nKeepsafe\nSource code\nBenchmarks\n\n\n\n\n\nREADME.rst\n\n\n\n\nAsync http client/server framework\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nKey Features\n\nSupports both client and server side of HTTP protocol.\nSupports both client and server Web-Sockets out-of-the-box and avoids\nCallback Hell.\nProvides Web-server with middlewares and plugable routing.\n\n\nGetting started\n\nClient\nTo get something from the web:\nimport aiohttp\nimport asyncio\n\nasync def main():\n\n    async with aiohttp.ClientSession() as session:\n        async with session.get('http://python.org') as response:\n\n            print(\"Status:\", response.status)\n            print(\"Content-type:\", response.headers['content-type'])\n\n            html = await 
response.text()\n            print(\"Body:\", html[:15], \"...\")\n\nasyncio.run(main())\nThis prints:\nStatus: 200\nContent-type: text/html; charset=utf-8\nBody: <!doctype html> ...\n\nComing from requests ? Read why we need so many lines.\n\nServer\nAn example using a simple server:\n# examples/server_simple.py\nfrom aiohttp import web\n\nasync def handle(request):\n    name = request.match_info.get('name', \"Anonymous\")\n    text = \"Hello, \" + name\n    return web.Response(text=text)\n\nasync def wshandle(request):\n    ws = web.WebSocketResponse()\n    await ws.prepare(request)\n\n    async for msg in ws:\n        if msg.type == web.WSMsgType.text:\n            await ws.send_str(\"Hello, {}\".format(msg.data))\n        elif msg.type == web.WSMsgType.binary:\n            await ws.send_bytes(msg.data)\n        elif msg.type == web.WSMsgType.close:\n            break\n\n    return ws\n\n\napp = web.Application()\napp.add_routes([web.get('/', handle),\n                web.get('/echo', wshandle),\n                web.get('/{name}', handle)])\n\nif __name__ == '__main__':\n    web.run_app(app)\n\nDocumentation\nhttps://aiohttp.readthedocs.io/\n\nDemos\nhttps://github.com/aio-libs/aiohttp-demos\n\nExternal links\n\nThird party libraries\nBuilt with aiohttp\nPowered by aiohttp\n\nFeel free to make a Pull Request for adding your link to these pages!\n\nCommunication channels\naio-libs Discussions: https://github.com/aio-libs/aiohttp/discussions\ngitter chat https://gitter.im/aio-libs/Lobby\nWe support Stack Overflow.\nPlease add aiohttp tag to your question there.\n\nRequirements\n\nasync-timeout\nmultidict\nyarl\nfrozenlist\n\nOptionally you may install the aiodns library (highly recommended for sake of speed).\n\nLicense\naiohttp is offered under the Apache 2 license.\n\nKeepsafe\nThe aiohttp community would like to thank Keepsafe\n(https://www.getkeepsafe.com) for its support in the early days of\nthe project.\n\nSource code\nThe latest developer version is available in a GitHub repository:\nhttps://github.com/aio-libs/aiohttp\n\nBenchmarks\nIf you are interested in efficiency, the AsyncIO community maintains a\nlist of benchmarks on the official wiki:\nhttps://github.com/python/asyncio/wiki/Benchmarks\n\n\n", "description": "Async HTTP client/server framework for asyncio", "category": "Web"}, {"name": "affine", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nAffine\nUsage\nUsage with GIS data packages\n\n\n\n\n\nREADME.rst\n\n\n\n\nAffine\nMatrices describing 2D affine transformation of the plane.\n\n\n\n\nThe Affine package is derived from Casey Duncan's Planar package. 
Please see\nthe copyright statement in affine/__init__.py.\n\nUsage\nThe 3x3 augmented affine transformation matrix for transformations in two\ndimensions is illustrated below.\n| x' |   | a  b  c | | x |\n| y' | = | d  e  f | | y |\n| 1  |   | 0  0  1 | | 1 |\n\nMatrices can be created by passing the values a, b, c, d, e, f to the\naffine.Affine constructor or by using its identity(),\ntranslation(), scale(), shear(), and rotation() class methods.\n>>> from affine import Affine\n>>> Affine.identity()\nAffine(1.0, 0.0, 0.0,\n       0.0, 1.0, 0.0)\n>>> Affine.translation(1.0, 5.0)\nAffine(1.0, 0.0, 1.0,\n       0.0, 1.0, 5.0)\n>>> Affine.scale(2.0)\nAffine(2.0, 0.0, 0.0,\n       0.0, 2.0, 0.0)\n>>> Affine.shear(45.0, 45.0)  # decimal degrees\nAffine(1.0, 0.9999999999999999, 0.0,\n       0.9999999999999999, 1.0, 0.0)\n>>> Affine.rotation(45.0)     # decimal degrees\nAffine(0.7071067811865476, -0.7071067811865475, 0.0,\n       0.7071067811865475, 0.7071067811865476, 0.0)\nThese matrices can be applied to (x, y) tuples to obtain transformed\ncoordinates (x', y').\n>>> Affine.translation(1.0, 5.0) * (1.0, 1.0)\n(2.0, 6.0)\n>>> Affine.rotation(45.0) * (1.0, 1.0)\n(1.1102230246251565e-16, 1.414213562373095)\nThey may also be multiplied together to combine transformations.\n>>> Affine.translation(1.0, 5.0) * Affine.rotation(45.0)\nAffine(0.7071067811865476, -0.7071067811865475, 1.0,\n       0.7071067811865475, 0.7071067811865476, 5.0)\n\nUsage with GIS data packages\nGeoreferenced raster datasets use affine transformations to map from image\ncoordinates to world coordinates. The affine.Affine.from_gdal() class\nmethod helps convert GDAL GeoTransform,\nsequences of 6 numbers in which the first and fourth are the x and y offsets\nand the second and sixth are the x and y pixel sizes.\nUsing a GDAL dataset transformation matrix, the world coordinates (x, y)\ncorresponding to the top left corner of the pixel 100 rows down from the\norigin can be easily computed.\n>>> geotransform = (-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)\n>>> fwd = Affine.from_gdal(*geotransform)\n>>> col, row = 0, 100\n>>> fwd * (col, row)\n(-237481.5, 195036.4)\nThe reverse transformation is obtained using the ~ operator.\n>>> rev = ~fwd\n>>> rev * fwd * (col, row)\n(0.0, 99.99999999999999)\n\n\n", "description": "Matrices for 2D affine transformations"}, {"name": "absl-py", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbseil Python Common Libraries\nFeatures\nGetting Started\nInstallation\nRunning Tests\nExample Code\nDocumentation\nFuture Releases\nLicense\n\n\n\n\n\nREADME.md\n\n\n\n\nAbseil Python Common Libraries\nThis repository is a collection of Python library code for building Python\napplications. 
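By way of illustration, a minimal sketch of an absl-py entry point combining the app and flags modules; the flag names used here are invented:

from absl import app
from absl import flags

FLAGS = flags.FLAGS
flags.DEFINE_string('name', 'world', 'Who to greet.')
flags.DEFINE_integer('times', 1, 'How many greetings to print.')

def main(argv):
    del argv  # unused; absl has already parsed the command line
    for _ in range(FLAGS.times):
        print('Hello, %s!' % FLAGS.name)

if __name__ == '__main__':
    app.run(main)

Running the script with --name=Ada --times=2 should print the greeting twice, with flag parsing, --help output, and logging setup handled by absl.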
The code is collected from Google's own Python code base, and has\nbeen extensively tested and used in production.\nFeatures\n\nSimple application startup\nDistributed commandline flags system\nCustom logging module with additional features\nTesting utilities\n\nGetting Started\nInstallation\nTo install the package, simply run:\npip install absl-py\nOr install from source:\npython setup.py install\nRunning Tests\nTo run Abseil tests, you can clone the git repo and run\nbazel:\ngit clone https://github.com/abseil/abseil-py.git\ncd abseil-py\nbazel test absl/...\nExample Code\nPlease refer to\nsmoke_tests/sample_app.py\nas an example to get started.\nDocumentation\nSee the Abseil Python Developer Guide.\nFuture Releases\nThe current repository includes an initial set of libraries for early adoption.\nMore components and interoperability with Abseil C++ Common Libraries\nwill come in future releases.\nLicense\nThe Abseil Python library is licensed under the terms of the Apache\nlicense. See LICENSE for more information.\n\n\n"}, {"name": "wheel", "readme": "\nThis library is the reference implementation of the Python wheel packaging\nstandard, as defined in PEP 427.\nIt has two different roles:\n\nA setuptools extension for building wheels that provides the\nbdist_wheel setuptools command\nA command line tool for working with wheel files\n\nIt should be noted that wheel is not intended to be used as a library, and\nas such there is no stable, public API.\n\nDocumentation\nThe documentation can be found on Read The Docs.\n\n\nCode of Conduct\nEveryone interacting in the wheel project\u2019s codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n", "description": "Reference implementation of the Python wheel packaging"}, {"name": "urllib3", "readme": "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nurllib3 is a powerful, user-friendly HTTP client for Python. Much of the\nPython ecosystem already uses urllib3 and you should too.\nurllib3 brings many critical features that are missing from the Python\nstandard libraries:\n\nThread safety.\nConnection pooling.\nClient-side SSL/TLS verification.\nFile uploads with multipart encoding.\nHelpers for retrying requests and dealing with HTTP redirects.\nSupport for gzip, deflate, brotli, and zstd encoding.\nProxy support for HTTP and SOCKS.\n100% test coverage.\n\nurllib3 is powerful and easy to use:\n>>> import urllib3\n>>> resp = urllib3.request(\"GET\", \"http://httpbin.org/robots.txt\")\n>>> resp.status\n200\n>>> resp.data\nb\"User-agent: *\\nDisallow: /deny\\n\"\n\nInstalling\nurllib3 can be installed with pip:\n$ python -m pip install urllib3\n\nAlternatively, you can grab the latest source code from GitHub:\n$ git clone https://github.com/urllib3/urllib3.git\n$ cd urllib3\n$ pip install .\n\nDocumentation\nurllib3 has usage and reference documentation at urllib3.readthedocs.io.\nCommunity\nurllib3 has a community Discord channel for asking questions and\ncollaborating with other contributors. Drop by and say hello \ud83d\udc4b\nContributing\nurllib3 happily accepts contributions. Please see our\ncontributing documentation\nfor some tips on getting started.\nSecurity Disclosures\nTo report a security vulnerability, please use the\nTidelift security contact.\nTidelift will coordinate the fix and disclosure with maintainers.\nMaintainers\n\n@sethmlarson (Seth M. 
Larson)\n@pquentin (Quentin Pradet)\n@theacodes (Thea Flowers)\n@haikuginger (Jess Shapiro)\n@lukasa (Cory Benfield)\n@sigmavirus24 (Ian Stapleton Cordasco)\n@shazow (Andrey Petrov)\n\n\ud83d\udc4b\nSponsorship\nIf your company benefits from this library, please consider sponsoring its\ndevelopment.\nFor Enterprise\nProfessional support for urllib3 is available as part of the Tidelift\nSubscription.  Tidelift gives software development teams a single source for\npurchasing and maintaining their software, with professional grade assurances\nfrom the experts who know it best, while seamlessly integrating with existing\ntools.\n", "description": "HTTP library with thread-safe connection pooling, file post support, sanity friendly, and more."}, {"name": "unattended-upgrades", "readme": ""}, {"name": "six", "readme": "\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\nSix is a Python 2 and 3 compatibility library.  It provides utility functions\nfor smoothing over the differences between the Python versions with the goal of\nwriting Python code that is compatible on both Python versions.  See the\ndocumentation for more information on what is provided.\nSix supports Python 2.7 and 3.3+.  It is contained in only one Python\nfile, so it can be easily copied into your project. (The copyright and license\nnotice must be retained.)\nOnline documentation is at https://six.readthedocs.io/.\nBugs can be reported to https://github.com/benjaminp/six.  The code can also\nbe found there.\n\n\n", "description": "Python 2 and 3 compatibility library."}, {"name": "setuptools", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCode of Conduct\nFor Enterprise\n\n\n\n\n\nREADME.rst\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSee the Installation Instructions in the Python Packaging\nUser's Guide for instructions on installing, upgrading, and uninstalling\nSetuptools.\nQuestions and comments should be directed to GitHub Discussions.\nBug reports and especially tested patches may be\nsubmitted directly to the bug tracker.\n\nCode of Conduct\nEveryone interacting in the setuptools project's codebases, issue trackers,\nchat rooms, and fora is expected to follow the\nPSF Code of Conduct.\n\nFor Enterprise\nAvailable as part of the Tidelift Subscription.\nSetuptools and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.\nLearn more.\n\n\n", "description": "Build and distribution tools for packaging Python projects."}, {"name": "requests-unixsocket", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nrequests-unixsocket\nUsage\nExplicit\nImplicit (monkeypatching)\nAbstract namespace sockets\nSee also\n\n\n\n\n\nREADME.rst\n\n\n\n\nrequests-unixsocket\n\n\n\n\nUse requests to talk HTTP via a UNIX domain socket\n\nUsage\n\nExplicit\nYou can use it by instantiating a special Session object:\nimport json\n\nimport requests_unixsocket\n\nsession = requests_unixsocket.Session()\n\nr = session.get('http+unix://%2Fvar%2Frun%2Fdocker.sock/info')\nregistry_config = r.json()['RegistryConfig']\nprint(json.dumps(registry_config, indent=4))\n\nImplicit (monkeypatching)\nMonkeypatching allows you to use the functionality in this module, while making\nminimal changes to your code. Note that in the above example we had to\ninstantiate a special requests_unixsocket.Session object and call the\nget method on that object. Calling requests.get(url) (the easiest way\nto use requests and probably very common), would not work. 
But we can make it\nwork by doing monkeypatching.\nYou can monkeypatch globally:\nimport requests_unixsocket\n\nrequests_unixsocket.monkeypatch()\n\nr = requests.get('http+unix://%2Fvar%2Frun%2Fdocker.sock/info')\nassert r.status_code == 200\nor you can do it temporarily using a context manager:\nimport requests_unixsocket\n\nwith requests_unixsocket.monkeypatch():\n    r = requests.get('http+unix://%2Fvar%2Frun%2Fdocker.sock/info')\n    assert r.status_code == 200\n\nAbstract namespace sockets\nTo connect to an abstract namespace\nsocket\n(Linux only), prefix the name with a NULL byte (i.e.: 0) - e.g.:\nimport requests_unixsocket\n\nsession = requests_unixsocket.Session()\nres = session.get('http+unix://\\0test_socket/get')\nprint(res.text)\nFor an example program that illustrates this, see\nexamples/abstract_namespace.py in the git repo. Since abstract namespace\nsockets are specific to Linux, the program will only work on Linux.\n\nSee also\n\nhttps://github.com/httpie/httpie-unixsocket - a plugin for HTTPie that allows you to interact with UNIX domain sockets\n\n\n\n"}, {"name": "python-apt", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "PyGObject", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  
For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Python bindings for GObject based libraries such as GTK."}, {"name": "PyAudio", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Bindings for PortAudio v19, allows playback and recording of audio on a variety of platforms.", "category": "Audio"}, {"name": "pip", "readme": "\n\n\n\n\n\n\n\n\n\n\n\npip - The Python Package Installer\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\npip - The Python Package Installer\n\n\n\npip is the package installer for Python. You can use pip to install packages from the Python Package Index and other indexes.\nPlease take a look at our documentation for how to install and use pip:\n\nInstallation\nUsage\n\nWe release updates regularly, with a new version every 3 months. Find more details in our documentation:\n\nRelease notes\nRelease process\n\nNote: pip 21.0, in January 2021, removed Python 2 support, per pip's Python 2 support policy. Please migrate to Python 3.\nIf you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms:\n\nIssue tracking\nDiscourse channel\nUser IRC\n\nIf you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms:\n\nGitHub page\nDevelopment documentation\nDevelopment IRC\n\n\nCode of Conduct\nEveryone interacting in the pip project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n", "description": "Package installer for Python."}, {"name": "idna", "readme": "\nSupport for the Internationalized Domain Names in\nApplications (IDNA) protocol as specified in RFC 5891. 
This is the latest version of\nthe protocol and is sometimes referred to as \u201cIDNA 2008\u201d.\nThis library also provides support for Unicode Technical\nStandard 46, Unicode IDNA Compatibility Processing.\nThis acts as a suitable replacement for the \u201cencodings.idna\u201d\nmodule that comes with the Python standard library, but which\nonly supports the older superseded IDNA specification (RFC 3490).\nBasic functions are simply executed:\n>>> import idna\n>>> idna.encode('\u30c9\u30e1\u30a4\u30f3.\u30c6\u30b9\u30c8')\nb'xn--eckwd4c7c.xn--zckzah'\n>>> print(idna.decode('xn--eckwd4c7c.xn--zckzah'))\n\u30c9\u30e1\u30a4\u30f3.\u30c6\u30b9\u30c8\n\nInstallation\nThis package is available for installation from PyPI:\n$ python3 -m pip install idna\n\n\nUsage\nFor typical usage, the encode and decode functions will take a\ndomain name argument and perform a conversion to A-labels or U-labels\nrespectively.\n>>> import idna\n>>> idna.encode('\u30c9\u30e1\u30a4\u30f3.\u30c6\u30b9\u30c8')\nb'xn--eckwd4c7c.xn--zckzah'\n>>> print(idna.decode('xn--eckwd4c7c.xn--zckzah'))\n\u30c9\u30e1\u30a4\u30f3.\u30c6\u30b9\u30c8\nYou may use the codec encoding and decoding methods using the\nidna.codec module:\n>>> import idna.codec\n>>> print('\u0434\u043e\u043c\u0435\u043d.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435'.encode('idna'))\nb'xn--d1acufc.xn--80akhbyknj4f'\n>>> print(b'xn--d1acufc.xn--80akhbyknj4f'.decode('idna'))\n\u0434\u043e\u043c\u0435\u043d.\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435\nConversions can be applied at a per-label basis using the ulabel or\nalabel functions if necessary:\n>>> idna.alabel('\u6d4b\u8bd5')\nb'xn--0zwm56d'\n\nCompatibility Mapping (UTS #46)\nAs described in RFC 5895, the\nIDNA specification does not normalize input from different potential\nways a user may input a domain name. This functionality, known as\na \u201cmapping\u201d, is considered by the specification to be a local\nuser-interface issue distinct from IDNA conversion functionality.\nThis library provides one such mapping, that was developed by the\nUnicode Consortium. Known as Unicode IDNA Compatibility Processing, it provides for both a regular\nmapping for typical applications, as well as a transitional mapping to\nhelp migrate from older IDNA 2003 applications.\nFor example, \u201cK\u00f6nigsg\u00e4\u00dfchen\u201d is not a permissible label as LATIN\nCAPITAL LETTER K is not allowed (nor are capital letters in general).\nUTS 46 will convert this into lower case prior to applying the IDNA\nconversion.\n>>> import idna\n>>> idna.encode('K\u00f6nigsg\u00e4\u00dfchen')\n...\nidna.core.InvalidCodepoint: Codepoint U+004B at position 1 of 'K\u00f6nigsg\u00e4\u00dfchen' not allowed\n>>> idna.encode('K\u00f6nigsg\u00e4\u00dfchen', uts46=True)\nb'xn--knigsgchen-b4a3dun'\n>>> print(idna.decode('xn--knigsgchen-b4a3dun'))\nk\u00f6nigsg\u00e4\u00dfchen\nTransitional processing provides conversions to help transition from\nthe older 2003 standard to the current standard. For example, in the\noriginal IDNA specification, the LATIN SMALL LETTER SHARP S (\u00df) was\nconverted into two LATIN SMALL LETTER S (ss), whereas in the current\nIDNA specification this conversion is not performed.\n>>> idna.encode('K\u00f6nigsg\u00e4\u00dfchen', uts46=True, transitional=True)\n'xn--knigsgsschen-lcb0w'\nImplementors should use transitional processing with caution, only in\nrare cases where conversion from legacy labels to current labels must be\nperformed (i.e. IDNA implementations that pre-date 2008). 
For typical\napplications that just need to convert labels, transitional processing\nis unlikely to be beneficial and could produce unexpected incompatible\nresults.\n\n\nencodings.idna Compatibility\nFunction calls from the Python built-in encodings.idna module are\nmapped to their IDNA 2008 equivalents using the idna.compat module.\nSimply substitute the import clause in your code to refer to the new\nmodule name.\n\n\n\nExceptions\nAll errors raised during the conversion following the specification\nshould raise an exception derived from the idna.IDNAError base\nclass.\nMore specific exceptions that may be generated as idna.IDNABidiError\nwhen the error reflects an illegal combination of left-to-right and\nright-to-left characters in a label; idna.InvalidCodepoint when\na specific codepoint is an illegal character in an IDN label (i.e.\nINVALID); and idna.InvalidCodepointContext when the codepoint is\nillegal based on its positional context (i.e. it is CONTEXTO or CONTEXTJ\nbut the contextual requirements are not satisfied.)\n\n\nBuilding and Diagnostics\nThe IDNA and UTS 46 functionality relies upon pre-calculated lookup\ntables for performance. These tables are derived from computing against\neligibility criteria in the respective standards. These tables are\ncomputed using the command-line script tools/idna-data.\nThis tool will fetch relevant codepoint data from the Unicode repository\nand perform the required calculations to identify eligibility. There are\nthree main modes:\n\nidna-data make-libdata. Generates idnadata.py and\nuts46data.py, the pre-calculated lookup tables using for IDNA and\nUTS 46 conversions. Implementors who wish to track this library against\na different Unicode version may use this tool to manually generate a\ndifferent version of the idnadata.py and uts46data.py files.\nidna-data make-table. Generate a table of the IDNA disposition\n(e.g. PVALID, CONTEXTJ, CONTEXTO) in the format found in Appendix\nB.1 of RFC 5892 and the pre-computed tables published by IANA.\nidna-data U+0061. Prints debugging output on the various\nproperties associated with an individual Unicode codepoint (in this\ncase, U+0061), that are used to assess the IDNA and UTS 46 status of a\ncodepoint. This is helpful in debugging or analysis.\n\nThe tool accepts a number of arguments, described using idna-data -h. Most notably, the --version argument allows the specification\nof the version of Unicode to use in computing the table data. For\nexample, idna-data --version 9.0.0 make-libdata will generate\nlibrary data against Unicode 9.0.0.\n\n\nAdditional Notes\n\nPackages. The latest tagged release version is published in the\nPython Package Index.\nVersion support. This library supports Python 3.5 and higher.\nAs this library serves as a low-level toolkit for a variety of\napplications, many of which strive for broad compatibility with older\nPython versions, there is no rush to remove older intepreter support.\nRemoving support for older versions should be well justified in that the\nmaintenance burden has become too high.\nPython 2. Python 2 is supported by version 2.x of this library.\nWhile active development of the version 2.x series has ended, notable\nissues being corrected may be backported to 2.x. Use \u201cidna<3\u201d in your\nrequirements file if you need this library for a Python 2 application.\nTesting. 
The library has a test suite based on each rule of the\nIDNA specification, as well as tests that are provided as part of the\nUnicode Technical Standard 46, Unicode IDNA Compatibility Processing.\nEmoji. It is an occasional request to support emoji domains in\nthis library. Encoding of symbols like emoji is expressly prohibited by\nthe technical standard IDNA 2008 and emoji domains are broadly phased\nout across the domain industry due to associated security risks. For\nnow, applications that wish need to support these non-compliant labels\nmay wish to consider trying the encode/decode operation in this library\nfirst, and then falling back to using encodings.idna. See the Github\nproject for more discussion.\n\n\n", "description": "Implements IDNA2008 internationalized domain names in applications."}, {"name": "distro-info", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "dbus-python", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nWarehouse\nGetting Started\nDiscussion\nTesting\nCode of Conduct\n\n\n\n\n\nREADME.rst\n\n\n\n\nWarehouse\nWarehouse is the software that powers PyPI.\nSee our development roadmap, documentation, and\narchitectural overview.\n\nGetting Started\nYou can run Warehouse locally in a development environment using\ndocker. See Getting started\ndocumentation for instructions on how to set it up.\nThe canonical deployment of Warehouse is in production at pypi.org.\n\nDiscussion\nYou can find help or get involved on:\n\nGithub issue tracker for reporting issues\nIRC: on Libera, channel #pypa for general packaging discussion\nand user support, and #pypa-dev for\ndiscussions about development of packaging tools\nThe PyPA Discord for live discussions\nThe Packaging category on Discourse for discussing\nnew ideas and community initiatives\n\n\nTesting\nRead the running tests and linters section of our documentation to\nlearn how to test your code.  For cross-browser testing, we use an\nopen source account from BrowserStack. 
If your pull request makes\nany change to the user interface, it will need to be tested to confirm\nit works in our supported browsers.\n\n\nCode of Conduct\nEveryone interacting in the Warehouse project's codebases, issue trackers, chat\nrooms, and mailing lists is expected to follow the PSF Code of Conduct.\n\n\n"}, {"name": "certifi", "readme": "\n\n\n\n\n\n\n\n\n\n\n\nCertifi: Python SSL Certificates\nInstallation\nUsage\nAddition/Removal of Certificates\n\n\n\n\n\nREADME.rst\n\n\n\n\nCertifi: Python SSL Certificates\nCertifi provides Mozilla's carefully curated collection of Root Certificates for\nvalidating the trustworthiness of SSL certificates while verifying the identity\nof TLS hosts. It has been extracted from the Requests project.\n\nInstallation\ncertifi is available on PyPI. Simply install it with pip:\n$ pip install certifi\n\n\nUsage\nTo reference the installed certificate authority (CA) bundle, you can use the\nbuilt-in function:\n>>> import certifi\n\n>>> certifi.where()\n'/usr/local/lib/python3.7/site-packages/certifi/cacert.pem'\n\nOr from the command line:\n$ python -m certifi\n/usr/local/lib/python3.7/site-packages/certifi/cacert.pem\n\nEnjoy!\n\nAddition/Removal of Certificates\nCertifi does not support any addition/removal or other modification of the\nCA trust store content. This project is intended to provide a reliable and\nhighly portable root of trust to python deployments. Look to upstream projects\nfor methods to use alternate trust.\n\n\n", "description": "Provides Mozilla's CA bundle."}]