MAIN INFO: the latest Ambrosinus-Toolkit version is 1.2.4

CHANGELOG

v1.2.4
-1 Fixed several bugs that came up after some A1111 API updates. The issue generally appeared with the error message "Index was outside the bounds of the array" and was related to the info text embedded in the generated image;
-2 Added an internal timeout to the "SDopts_loc" component (the one used to select different checkpoint models);
-3 Fixed an issue related to generating more than one image (N params);
-4 Fixed several issues with the INFOtext generation when the user switches among the T2I Base, CN v1.0 and CN v1.1.X processes;
-5 Fixed the ability to assign decimal CFG values (e.g. 5.5, 5.8, etc.);
-6 Reduced the number of times the CN v1.1.X alert message appears;

v1.2.3
-1 Fixed a bug that came up after some A1111 API updates. The issue generally appeared with the error message "Index was outside the bounds of the array" and was related to the info text embedded in the generated image;

v1.2.2
-1 "LaunchSD_loc" has been updated: the input parameters are now ordered to reflect the progression of the workflow steps, and you can check the IP address through the integrated ipconfig command, which is helpful when the user ticks the listen argument;
-2 "CustIPport_loc" component has been added: it helps the user set up the Remote PC configuration by passing the IP address shared on the network and the port number opened in the Windows OS Firewall (a minimal request sketch follows this block);
-3 "AIeNG_loc" component: the new look adds the implementation of ControlNET version 1.1.X. Bear in mind it is implemented but not yet fully supported by the API, so I suggest using v1.0; you can test v1.1.X, but it is already known that its dataset models do not work properly in API mode;
-4 Thanks to WinSDlauncher (the Windows OS app) it is possible to set up the Server PC without switching/fetching the Rhino license between the Server PC and the Remote one (please see the video demo);
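For context on what reaching a remote A1111 instance involves (the scenario addressed by the listen argument and "CustIPport_loc"), here is a minimal C# sketch that posts a prompt to the /sdapi/v1/txt2img endpoint of an AUTOMATIC1111 web UI on the local network. The host IP, port, prompt and parameter values are placeholder assumptions, and the snippet is an independent illustration, not the toolkit's internal code.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RemoteTxt2ImgSketch
{
    static async Task Main()
    {
        // Placeholder values: use the IP shared on your network and the
        // port opened in the Windows Firewall (7860 is the A1111 default).
        string host = "192.168.1.50";
        int port = 7860;
        string url = $"http://{host}:{port}/sdapi/v1/txt2img";

        // Minimal JSON payload; the real components expose many more
        // parameters (steps, CFG, checkpoint, ControlNET units, etc.).
        string payload = "{ \"prompt\": \"concept design of a pavilion\", " +
                         "\"steps\": 20, \"cfg_scale\": 5.5, \"batch_size\": 1 }";

        using var client = new HttpClient { Timeout = TimeSpan.FromMinutes(5) };
        var content = new StringContent(payload, Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(url, content);
        response.EnsureSuccessStatusCode();

        // The response JSON contains a base64 "images" array and an "info"
        // string with the generation settings embedded in the image.
        string json = await response.Content.ReadAsStringAsync();
        Console.WriteLine(json.Substring(0, Math.Min(200, json.Length)));
    }
}
```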
v1.2.1
-1 "LaunchSD_loc" has been improved: it is now possible to interact with the webui-user.bat file and some of its most used arguments, and the component can create a webui-user.bat file according to the chosen cmd arguments;
-2 "ViewCapture" component has been added to the 2.Image sub-category. It allows the user to save the Rhino viewport as an image file (JPG/PNG) with a scale factor and as a Rhino Named Views object. Now you can easily pass your 3D model screenshot as the BaseIMG input of the AIeng_loc component to render your concept design;
-3 "UpsclAI_loc" component has been added to the 3.AI sub-category. This upscaler can increase the output size of any image format by leveraging the AI models available in the A1111 project;
-4 "OpenDir" component has been added to the 4.Manage sub-category. It is still a work in progress, but basically it can quickly open any full folder path passed as input;
-5 "AIeng_loc" component has been updated. Currently it works fine with the ControlNET v1.0 extension (see the video tutorial on how to install it) and it is ready for ControlNET v1.1; more info in the next update;
-6 "Extra components" have been re-ordered into the 9.Extra sub-category. Please delete the old ones (to avoid overlapping components) and download the new versions from the GitHub page;

v1.2.0
-1 "SDopts_loc" has been added to the AI sub-category. This component allows the user to set a custom Stable Diffusion model checkpoint, for instance the "mdjrny-v4.safetensors" model, a dataset trained on Midjourney version 4 images (see the sketch after this block);
-2 "SD-Imginfo" allows the user to read all the AI-Gen settings used for image generation;
-3 All C# components have been updated with a new right-click context menu to get more info;
-4 The arrangement of the AI sub-category components has been reordered;
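To illustrate what selecting a checkpoint model through the A1111 API involves (the job of "SDopts_loc", including the internal timeout mentioned in v1.2.4), here is a hedged C# sketch that posts the sd_model_checkpoint option to a local web UI instance. The URL, model name and timeout are assumptions for the example; the component's actual implementation may differ.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CheckpointSwitchSketch
{
    static async Task Main()
    {
        // Placeholder local endpoint; loading a large checkpoint can take a
        // while, hence the generous timeout (cf. the v1.2.4 note above).
        string url = "http://127.0.0.1:7860/sdapi/v1/options";
        string modelName = "mdjrny-v4.safetensors"; // example from the v1.2.0 note

        using var client = new HttpClient { Timeout = TimeSpan.FromMinutes(3) };
        string payload = $"{{ \"sd_model_checkpoint\": \"{modelName}\" }}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PostAsync(url, content);
        Console.WriteLine(response.IsSuccessStatusCode
            ? "Checkpoint switch requested."
            : $"Request failed: {(int)response.StatusCode}");
    }
}
```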
v1.1.9
-1 "AIeNG_loc" has been added to the AI sub-category. This component lays the foundations for future adaptations, expansions and tools to make the most of the best features provided by the AUTOMATIC1111 project. It is part of the "_loc" components able to run the Stable Diffusion and ControlNET neural networks locally;
-2 "LaunchSD_loc" has been added to the AI sub-category. This component can open and close the localhost port and also check its status. It is part of the "_loc" components able to run the Stable Diffusion and ControlNET neural networks locally (thanks to the Automatic1111 project);
-3 "LA_DPTto3D_b101" has been added to the AI sub-category (downloadable from the GitHub page). This component can generate a PointCloud (stored in the PLY file format) directly from a 2D RGB image. It exploits DPT technology (transformers libraries). Right-click on the component for more info;

v1.1.8
-1 "FileNamer" component has been fixed; these updates (v1.1.7/v1.1.8) were necessary due to a couple of incoming research projects;
-2 "La_GrayGaussMask" has been updated. Some minor fixes;
-3 "LA_OpenAI-GH_Ask_b107" has been updated. Some minor fixes;
-4 "LA_OpenAI-GHadv_b112" has been updated. Some minor fixes;
-5 "LA_StabilityAI-GH_b108" has been updated. Some minor fixes;

v1.1.7
-1 "SeeOut" component has been fixed to avoid a loop action when its output is passed as input to an OpenAI or StabilityAI component. It is no longer necessary to rename the Number Slider to "SeqID";

v1.1.6
-1 Ambrosinus-Toolkit updated with the "SeeOut" component in the Image sub-category. It can step through the image file path output with a "SeqID" slider or simply show the latest file path generated by an OpenAI or StabilityAI GH component;
-2 Minor fix for the LA_StabilityAI-GH component, now at Build-107; see the GitHub page to download it. If you already downloaded this build on 2023/02/06, please do it again;
-3 Minor fix for the LA_OpenAI-GH component, now at Build-111; see the GitHub page to download it;
-4 Minor fix for the LA_OpenAI_Ask-GH component, now at Build-106; see the GitHub page to download it;

v1.1.5
-1 Ambrosinus-Toolkit updated with the "FileNamer" component. It can generate a filename that avoids overwriting existing files;
-2 New version of the "SdINinfo" component, now able to show different info from the StabilityAI image output, such as BaseIMG and MaskIMG info/links;
-3 "LA_StabilityAI-GH" is now advanced, which means it is possible to run the TXT2IMG, IMG2IMG and IMG2IMG Masking features. I also added CLIP guidance to drive the image generation process;
-4 All images are stored in the IMGs folder and all metadata in the TXTs folder. Each run is tracked in a unique CSV file, which is helpful for data storytelling;

v1.1.4
-1 "LA_StabilityAI-GH" build-102 has been updated. After StabilityAI updated the "stability-sdk" Python library to v0.3.0, this component lets you select different Engines (Stable Diffusion v1, v1.5, v2.0 and v2.1) and choose among 10 different samplers. I have modified the filename layout to integrate the "SD-INinfo" actions;
-2 "SD-INinfo" component has been developed and added. This component allows the user to grab the settings info from the filename of an image generated by the LA_StabilityAI-GH component;
-3 Fixed minor bugs and inaccuracies in descriptions and names;

v1.1.3
-1 "AnsToPrompt" has been added. This component converts the AskToOpenAI answer into a text prompt;
-2 "AskToOpenAI" component has been developed and added (see the GitHub page to install it) to play with the OpenAI completion mode; the OpenAI GPT-3 model turns this component into a very smart "chatbot". I asked it for design tips but also to generate simple pieces of code (Python, C#, etc.);

v1.1.2
-1 Sub-category layout has changed;
-2 "DALLEfromGH" UI updated with "Light version" info;
-3 "LA_OpenAI_GHadv" has replaced the CPython "Light version". Now you can experiment with the EDIT and VARIATION modes. You can install it via the GitHub repo;
-4 Sub-category Image has been added;
-5 "ImageConv" can read image file info and can convert images to these formats: Bmp, Emf, Exif, Gif, Icon, Jpeg, MemoryBmp, Png, Tiff, Wmf;
-6 "ImageMask" can generate PNG and JPG image masks simply by drawing them inside Rhino over the BaseIMG;

v1.1.1
-1 "DALLEfromGH" has been added to the toolkit. To run it, the toolkit uses the Json DLL library provided in the installation ZIP file. Now you can explore the OpenAI (DALL-E) prompt-to-image request process directly inside GH, with no Python libraries needed to run OpenAI. It replaces the LA_OpenAI-GH.ghuser component (which will nevertheless remain available on GitHub, and you can always install it by following the guide);

v1.1.0
-1 "GradientGen" has been replaced by the Ambrosinus-Toolkit project;
-2 "ToolkitVersion" component: the toolkit can now show the version and the main changelog (main updates) and, above all, notify the user when the latest version needs to be installed;
-3 "HEXtoRGB" can now also convert from RGB to HEX values, with lowercase and hashtag options;
-4 "KelvinToRGB" converts a Kelvin temperature to an RGB value (for specific values it shows some extra info about devices that emit light at the same temperature); a sketch of a common approximation is included at the end of this changelog;
-5 "WavelengthToRGB" converts a wavelength in the visible light range (380nm-780nm) to an RGB value;

v1.0.0
-1 "GradientGen and Utilities", an internal module that became part of the Ambrosinus-Toolkit project;
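As background for the colour utilities listed under v1.1.0, the following C# sketch shows Tanner Helland's widely used approximation for converting a colour temperature in Kelvin to RGB. It is an independent illustration and not necessarily the exact formula used by the "KelvinToRGB" component.

```csharp
using System;

static class KelvinToRgbSketch
{
    // Tanner Helland's approximation, reasonable for roughly 1000K-40000K.
    public static (int R, int G, int B) Convert(double kelvin)
    {
        double t = kelvin / 100.0;

        double r = t <= 66 ? 255
                           : 329.698727446 * Math.Pow(t - 60, -0.1332047592);
        double g = t <= 66 ? 99.4708025861 * Math.Log(t) - 161.1195681661
                           : 288.1221695283 * Math.Pow(t - 60, -0.0755148492);
        double b = t >= 66 ? 255
                           : (t <= 19 ? 0
                                      : 138.5177312231 * Math.Log(t - 10) - 305.0447927307);

        int Clamp(double v) => (int)Math.Max(0, Math.Min(255, v));
        return (Clamp(r), Clamp(g), Clamp(b));
    }

    static void Main()
    {
        // 6500K is close to daylight white; expect a near-neutral colour.
        var (r, g, b) = Convert(6500);
        Console.WriteLine($"R={r} G={g} B={b}");
    }
}
```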