{ "lang": "en", "data": { "FORM": { "addSecretObjects": { "Name": "Kubernetes secret name. It should only contain alphanumeric characters and '-'.\nSelect the parameters which you want to include into the secrets. You can also change the name of the parameter.\n" }, "AddEFS": { "Name": "Friendly name for Elastic File System. User provided name will be appended with prefix as \"duploservice--\".", "CreationToken": "A string of up to 64 ASCII characters. Amazon EFS uses this to ensure idempotent creation.", "ProvisionedThroughputInMibps": "The throughput, measured in MiB/s, that you want to provision for a file system that you're creating. \nValid values are 1-1024. Required if ThroughputMode is set to provisioned . The upper limit for throughput is 1024 MiB/s.\n", "performanceMode": "The performance mode of the file system. We recommend generalPurpose performance mode for most file systems. \nFile systems using the maxIO performance mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for most file operations. The performance mode can't be changed after the file system has been created.\nNote: The maxIO mode is not supported on file systems using One Zone storage classes.\n", "throughputMode": "Specifies the throughput mode for the file system, either bursting or provisioned . If you set ThroughputMode to provisioned , you must also set a value for ProvisionedThroughputInMibps. \nAfter you create the file system, you can decrease your file system's throughput in Provisioned Throughput mode or change between the throughput modes, as long as it\u2019s been more than 24 hours since the last decrease or throughput mode change.\n", "Backup": "Specifies whether automatic backups are enabled on the file system that you are creating. Set the value to true to enable automatic backups. \nIf you are creating a file system that uses One Zone storage classes, automatic backups are enabled by default.\n", "Encrypted": "A Boolean value that, if true, creates an encrypted file system. When creating an encrypted file system, you have the option of specifying an existing Key Management Service key (KMS key)." }, "bucket": { "bucketName": "Specify the name of the bucket.", "EnableVersioning": "Select to enable bucket's Versioning configuration.", "AllowPublicAccess": "Select to enable public access to a bucket.", "Labels": "Specify key/value label pairs to assign to the bucket.", "MultiRegion": "(Optional) Multi-Region for availability across largest area. If Location Type input not provided, by default Bucket will be created in `us (Multiple Regions in United States)`.", "Region": "(Optional) Select Single Region. Location Type cannot be edited, after bucket creation.", "RegionOptions": "Select Region. Location Type cannot be edited, after bucket creation.", "MultiRegionOptions": "Select Region. If not provided Bucket will be created in `us (Multiple Regions in United States)`. Location Type cannot be edited, after bucket creation." }, "toggleIAMAuth": { "EnableIamAuth": "Enable to set IAM database authentication. For supported regions and engine versions, refer [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.IamDatabaseAuthentication.html)" }, "job": { "Name": "Enter the name of the job.", "TargetType": "Select from the supported Target Types.", "Schedule": "Specify schedule in Cron format. Example- `0 0 * * 0`. This will run once a week at midnight on Sunday morning. 
For more help on cron schedule expressions, click [here](https://crontab.guru/#0_0_*_*_0).", "Description": "Specify a description for the job.", "Timezone": "Select the time zone to be used in interpreting the schedule.", "PubSubTargetTopicName": "Select the Pub/Sub topic to which messages will be published when a job is delivered.", "PubSubTargetAttributes": "Enter the attributes for the PubsubMessage.\nExample:\n```js\n{\n\"source\": \"cloud-scheduler\",\n\"type\": \"backup\"\n}\n```\n", "PubSubTargetData": "Enter the message body.\nExample:\n```js\n{\n\"task\": \"daily-report\",\n\"status\": \"pending\"\n}\n```\n", "AppEngineTargetAppEngineRoutingService": "Specify the App service name.", "AppEngineTargetAppEngineRoutingVersion": "Specify the App version.", "HttpTargetHttpMethod": "Select the HTTP method.", "AppEngineTargetRelativeUri": "Specify the relative URL; it must begin with \"/\" and must be a valid HTTP relative URL.", "HttpTargetHeaders": "Specify HTTP request headers.\n```js\n{\n\"Content-Type\": \"application/json\"\n}\n```\n", "CloudFunctionName": "Select the Cloud Function. Select Other to manually enter the Target URI.", "HttpTargetUri": "Enter the full URI path that the request will be sent to.", "HttpTargetAuth": "Select authentication.", "HttpTargetBody": "Enter the message body.\nExample:\n```js\n{\n\"task\": \"daily-report\",\n\"status\": \"pending\"\n}\n```" }, "AddFunctionApp": { "hostingPlan": "Select the hosting plan for the Function App (e.g., 'Consumption (Serverless)' or 'Functions Premium').", "Name": "Enter the name for your Function App.", "selectedOperatingSystem": "Select the operating system for the Function App.", "storageAccountName": "Choose the storage account to associate with the Function App.", "runtimeStack": "Select the runtime stack for the Function App.", "version": "Select the runtime stack version.", "appSettings": "Enter application settings in JSON format.", "connectionStrings": "Enter connection strings in JSON format." }, "createSnapshot": { "SnapshotName": "Specify the name for the snapshot." }, "AddEC2Host": { "Name": "Friendly name for the host. The user-provided name will be prefixed with \"duploservice--\".", "Zone": "AWS Availability Zone in which to create the EC2 instance.\n**Automatic:** Select this option to automatically assign the availability zone to the EC2 host. DuploCloud will assign the zone whose subnet has the most available IPv4 addresses.\n", "InstanceType": "Select an instance type that meets your computing, memory, networking, or storage needs. Select type as \"Other\" if you don't see the desired option in the dropdown.", "InstanceCount": "Desired capacity for the autoscaling group.", "minInstanceCount": "Minimum instance count. The autoscaling group will make sure that the total number of instances is always greater than or equal to the minimum count.", "maxInstanceCount": "Maximum instance count. The autoscaling group will make sure that the total number of instances is always less than or equal to the maximum count.", "IsClusterAutoscaled": "Check this when you want the Kubernetes Cluster Autoscaler to manage autoscaling for this cluster.", "allocationTags": "Allocation tags are the simplest way to constrain containers/pods to hosts/nodes. The DuploCloud/Kubernetes orchestrator will make sure containers run on hosts having the same allocation tags.", "diskSize": "EBS volume size in GB. If not specified, the volume size will be the same as defined in the AMI.", "agentPlatform": "Select the container orchestration platform.\n1. 
**Linux Docker/Native:** Select this option if you want to run Docker native services that are Linux-based.\n2. **Windows Docker/Native:** Select this option if you want to run Docker native services that are Windows-based.\n3. **EKS Linux:** Select this option if you want to run services on the Kubernetes cluster.\n4. **None:** This option has to be selected when the EC2 instance is not used for running containers.\n5. **ECS:** This option is available only for Infrastructures with ECS enabled, and only in ASG form; it is used when configuring a Capacity Provider for an ECS Service.\n", "ImageId": "AMI ID for the EC2 instance. The AMI should be compatible with the agent platform. Select type as \"Other\" if you don't see the desired option in the dropdown.", "blockEBSOptimization": "Set this to enable EBS optimization.", "enableHibernation": "Hibernation stops your instance and saves the contents of the instance\u2019s RAM to the root volume. You cannot enable hibernation after the EC2 host is launched.", "metaDataServiceFlag": "Select `Disabled` to turn off access to instance metadata. Otherwise you can set `V1 and V2`, or just `V2`. If you do not specify a value, the default is V2 only.", "base64": "Base64-encoded user data. On a Linux machine you can encode a script file using the command ```cat <script-file> | base64 -w 0```.", "tags": "Tags to be added to the EC2 instance. The format for adding tags is as below.\n 1. **EKS Linux** platform:\n```js\n{\n \"key\" : \"value\"\n}\n```\n2. **Linux Docker/Native** platform:\n`\"key=value, key1=value1\"`\n", "volumes": "Array of extra block devices in JSON format as below.\n```js\n[\n {\n \"Name\":\"/dev/sda1\", \n \"VolumeType\":\"gp2\", \n \"Size\":\"100\",\n \"DeleteOnTermination\": \"true\"\n }\n]\n```\n", "nwInterfaces": "Extra network interfaces to be attached to the EC2 host in JSON format as below.\n```js\n[\n {\n \"NetworkInterfaceId\": \"eni-095827b411091db43\",\n \"DeviceIndex\": 0\n },\n {\n \"NetworkInterfaceId\": \"eni-0df26c4b283cde675\",\n \"DeviceIndex\": 1\n }\n] \n```\n", "DedicatedHostId": "Specify the Dedicated Host ID. This ID is used to launch an instance onto the specified host. Example- `h-0c6ab6f38bdcb24f6`.", "useSpotInstancesCheck1": "Enable to launch hosts using Spot Instances.", "maximumSpotPrice": "(Optional) If not specified, the default price is used.\nTo set one, specify the price in dollars, example- `0.0245`. Refer to Spot Instance pricing [here](https://aws.amazon.com/ec2/spot/pricing/).\n", "canScaleFromZero": "Enable the Scale From Zero (BETA) feature so DuploCloud can scale up the initial host in the ASG whenever it detects that an otherwise unschedulable pod would be able to run on such a host.", "extraNodeLabels": "Specify additional Node Labels.\n ```js\n {\n \"team\" : \"backend\",\n \"workload\" : \"web\"\n }\n ```\n", "keyPairType": "Select the key pair type. Supports ED25519 and RSA key pair types. Defaults to `ed25519`.", "enabledMetrics": "Select to enable Auto Scaling group metrics collection.", "Zones": "Select Availability Zones.", "enableDiskEncryption": "Select to enable encryption at rest for EBS volumes.", "EncryptionKey": "Select KMS keys for disk encryption." }, "datafactory": { "Name": "Specify the name of the Data Factory. Data Factory names must be globally unique.", "PublicEndpoint": "Enable/disable Data Factory visibility to the public network." 
}, "S3bucketNotification": { "DestinationType": "Select from the supported destination types.", "Destination": "Sends event messages to the selected type.", "EventTypes": "Select specific bucket-level events that can trigger a notification or an EventBridge event. Multiple Selection allowed." }, "AddRdsReplica": { "Identifier": "Please provide a unique identifier for the RDS replica instance that is unique across all tenants. The cluster identifier is used to determine the cluster's endpoint. An identifier cannot end with a hyphen or contain two consecutive hyphens or start with a number. It should also be 49 characters or shorter and must be in all lowercase.", "Engine": "Select Database engine for creating RDS instance.", "EngineVersion": "Select database engine version. If not selected latest version will be used while creating database. Select type as 'Other' if you don't see desired option in dropdown list.", "Size": "Instance size for RDS. Select type as 'Other' if you don't see desired option in dropdown list.", "AvailabilityZone": "Select availability zone for high availability." }, "cwrule-add": { "Rule Name": "The name of the rule.", "Description": "The description of the rule.", "ScheduleExpression": "The scheduling expression. Use Rate Expressions. For example, `rate(5 minutes)` or `rate(1 day)`. Refer [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#RateExpressions).", "State": "Select the state of the rule." }, "AzureAvailabilitySet": { "name": "Enter a unique name for the Availability Set. Must be 3-61 characters, lowercase letters, numbers, and hyphens only.", "sku": "Select whether to use managed disks for the Availability Set. Options include 'Classic' and 'Aligned'.", "platformFaultDomainCount": "Specify the number of fault domains for the Availability Set. Valid values are 1 to 3.", "platformUpdateDomainCount": "Specify the number of update domains for the Availability Set. Valid values are 1 to 20.", "Name": "Enter a unique name for the Availability Set. Must be 3-61 characters, lowercase letters, numbers, and hyphens only.", "Sku": "Select whether to use managed disks for the Availability Set. Options include 'Classic' and 'Aligned'.", "PlatformFaultDomainCount": "Specify the number of fault domains (1-3). Fault domains isolate your VMs to reduce impact from hardware failures.", "PlatformUpdateDomainCount": "Specify the number of update domains (1-20). Update domains ensure VMs are rebooted in batches during maintenance." }, "AddAwsSecrets": { "Name": "The name of the new secret.", "SecretValueType": "Secret type.\n1. **JSON Key/Value pairs:** Provide your secret information, such as credentials and connection details, as key/value pairs.\n2. **Plain text:** You can choose the `Plain text` option to store your secret in plaintext string.\n", "KeyValsObject": "Key Value pair\n```yaml\n{\n \"username\": \"USER\",\n \"password\": \"EXAMPLE-PASSWORD\"\n}\n```", "Value": "Plain Text String." }, "AddRDS": { "Identifier": "Please provide a unique identifier for the RDS instance that is unique across all tenants. The cluster identifier is used to determine the cluster's endpoint. An identifier cannot end with a hyphen or contain two consecutive hyphens or start with a number. 
It should also be 49 characters or shorter and must be in all lowercase.", "SnapshotId": "Select this when you want to create the RDS instance from an existing snapshot.", "Engine": "Select the database engine for creating the RDS instance.", "EngineVersion": "Select the database engine version. If not selected, the latest version will be used when creating the database. Select type as 'Other' if you don't see the desired option in the dropdown list.", "Username": "Specify an alphanumeric string that defines the login ID for the master user. You use the master user login to start defining all users, objects, and permissions in the databases of your DB instance. The master username must start with a letter.", "Password": "Specify a string that defines the password for the master user. The master password must be at least eight characters long; the accepted characters are ```[a-z] [A-Z] [0-9] [- * ! $ % &]```.", "ClusterIdentifier": "Cluster Identifier", "DbSize": "Instance size for RDS. Select type as 'Other' if you don't see the desired option in the dropdown list.", "AllocatedStorage": "Storage allocation for the RDS instance in GB.", "MinCapacity": "Set the minimum capacity unit for the DB cluster. Each capacity unit is equivalent to a specific compute and memory configuration.", "MaxCapacity": "Set the maximum capacity unit for the DB cluster. Each capacity unit is equivalent to a specific compute and memory configuration.", "AutoPause": "Specify the amount of time to pass with no database traffic before you scale to zero processing capacity. When database traffic resumes, your Serverless cluster resumes processing capacity and scales to handle the traffic.", "AutoPauseDuration": "Amount of time the cluster can be idle before scaling to zero.", "DBParameterGroupName": "Database parameter group name.", "ClusterParameterGroupName": "Cluster parameter group name.", "EncryptionKey": "Choose to encrypt the given instance.", "EnableLogging": "Select this option to enable logging for the RDS instance.", "MultiAZ": "Create the database in multiple availability zones for high availability.", "StorageType": "Select the StorageType. Default is `gp3`.", "StoreDetailsInSecretManager": "Enable to store the RDS password in AWS Secrets Manager.", "EnableIamAuth": "Enable to set IAM database authentication. For supported regions and engine versions, refer [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.IamDatabaseAuthentication.html)", "BackupRetentionPeriod": "Specify the retention period in days for automated backups. Valid values are 1-35. (Optional) If not specified, the backup retention period defaults to 1 day.", "AvailabilityZone": "Select an Availability Zone (AZ).", "CACertificateIdentifier": "Select the certificate authority.", "EnableMultiAZ": "Enable Multi Availability Zone deployment.", "DatabaseName": "Specify a name for your database.", "EnablePerformanceInsights": "Enable Performance Insights.", "PerformanceInsightsRetentionPeriod": "Specify the amount of time in days to retain Performance Insights data. Defaults to 7 days.", "PerformanceInsightsEncryptionKey": "Choose to encrypt Performance Insights data." }, "cwtarget-ecs-add": { "Task Definition Family": "Specify the task definition to use if the event target is ECS.", "Task Version": "Select the task version.", "Task Count": "Specify the number of tasks to create based on the TaskDefinition. Minimum value 1, maximum value 10." }, "ecsservice": { "Name": "The name of your service. 
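For example, `web-api`. 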
Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.", "vcpus": "Task definition with revision.", "Replicas": "ECS makes sure a specified number of replicas run across your cluster.", "DnsPrfx": "Prefix that will be added to the base domain registered in the plan for this tenant. If not specified, the default value will be `-`.", "HealthCheckGracePeriodSeconds": "The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. \nThis is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0 is used.\n", "OldTaskDefinitionBufferSize": "Old task definition buffer size. This can be used to limit the number of stale task definitions kept in the buffer. The default value is 10.", "CapacityProvider": "Add a capacity provider to the custom capacity provider strategy for the cluster. If there are no existing capacity providers, create a new capacity provider from the capacity providers tab on the cluster details screen." }, "addTaintForm": { "key": "Specify the Key for the taint. Example- `taint-key`.", "value": "Specify the Value for the taint. Example- `taint-value`.", "effect": "Select the taint effect." }, "lambda-add-layer": { "SelectLayer": "Select the Layer. The list shows the layers compatible with the runtime of the function. Applicable only for the `.zip` package type.", "SelectVersion": "Select the version of the layer to use in the function." }, "addEmrStudioSidebar": { "Name": "Name of the EMR Studio.", "Description": "A detailed description of the Amazon EMR Studio.", "S3Bucket": "Select the S3 bucket for backup.", "S3Folder": "Specify the folder name in S3." }, "EksNodepoolForm": { "name": "Specify the name of the node pool.", "expireAfter": "Specify in hours. This configuration means the node pool will expire and be deleted after the specified hours.", "terminationGracePeriod": "Specify the Termination Grace Period in hours.\nTime given to a node for pod draining before force-deleting pods and removing the node.\nIn EKS Auto Mode this is implicitly set to 24 hours when omitted. See [here](https://docs.aws.amazon.com/eks/latest/userguide/create-node-pool.html#_termination_grace_period).\n", "cpu": "Specify the CPU limit.\nThis is a ceiling on how much CPU a node pool can cumulatively consume by scaling nodes.\nWhen defined, the node pool stops scaling if scaling would take CPU consumption beyond this limit.\nOmitting the limit means not specifying an upper bound for CPU consumption. For details, see [here](https://karpenter.sh/docs/concepts/nodepools/#speclimits)\n", "memory": "Specify the memory limit.\nThis is a ceiling on how much memory a node pool can cumulatively consume by scaling nodes.\nWhen defined, the node pool stops scaling if scaling would take memory consumption beyond this limit.\nOmitting the limit means not specifying an upper bound for memory consumption. For details, see [here](https://karpenter.sh/docs/concepts/nodepools/#speclimits)\n", "consolidationPolicy": "Select to determine how EKS handles potential disruptions to running workloads when performing node consolidation.\n1. **WhenEmpty:** Nodes are only considered for disruption (removal or replacement) if they have no running pods. \n2. 
**WhenEmpty or Underutilized:** Nodes are considered for disruption if they are empty OR if they have been underutilized for a specified duration (defined by `Consolidate After`). \n", "consolidateAfter": "Specify in minutes. Defines how long a node should be idle (underutilized) before it's considered for consolidation (i.e., termination and replacement for cost/performance optimization)." }, "UpdateEfsLifecyclePolicy": { "TransitionToIA": "Select the duration to transition files to the IA storage class.", "TransitionToPrimaryStorageClass": "Enable to transition files back to the primary storage class." }, "enableEventBridge": { "EnableEventBridge": "Enable this setting to allow the bucket to send events to Amazon EventBridge." }, "AddNamespaceQueue": { "Name": "Specify the name of the ServiceBus Queue resource.", "MaxSizeInMegabytes": "Select the maximum size of memory allocated for the queue.", "LockDuration": "Specify the amount of time in seconds that the message is locked for other receivers.", "MaxDeliveryCount": "Specify the value that controls when a message is automatically dead-lettered.", "AutoGrow": "Enable/disable to control whether the queue has dead-letter support when a message expires.", "EnablePartitioning": "Enable/disable to control whether the queue is partitioned across multiple message brokers." }, "topicRule": { "Name": "Specify the name of the rule.", "Description": "Add a description to the rule.", "SqlSelect_later": "The SQL statement used to query the topic. For SQL expressions, refer [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-reference.html).", "TopicFilter_later": "TopicFilter for the query. Defaults to `#`.", "SqlWhere_later": "The WHERE clause determines whether the actions specified by a rule are carried out. Refer [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-where.html).", "SqlVersion": "The version of the SQL rules engine to use when evaluating the rule. Defaults to `2016-03-23`.", "Disabled": "Select to enable or disable the destination.", "Action 1": "Specify actions when a rule is invoked. For more details, refer [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-rule-actions.html).", "UseErrorAction": "Enable to have the rules engine trigger an error action if a problem is identified while activating the topic rule.", "Error Action": "Define the error action message. Refer [here](https://docs.aws.amazon.com/iot/latest/developerguide/rule-error-handling.html) for examples." }, "devops-lambda-update": { "S3Bucket": "Select the S3 bucket containing the Lambda function package.", "S3BucketKey": "Specify the key (file path) of the Lambda function package within the selected S3 bucket." }, "requirementForm": { "requirementKey": "Select a Key from the list. If your desired option is not available, choose 'Other'.", "otherKey": "Select a key from the list. If the desired key is not available, select 'Other' and specify the key manually.", "requirementOperator": "Select the desired operator from the list.", "requirementValue": "Select a Value. If your desired Value is not listed, type the value manually and click Add Item." }, "AddSbAccessPolicy": { "Name": "Specify the name of the Authorization Rule.", "canManage": "Enable to configure the authorization rule to have Manage permissions to the ServiceBus Queue.", "canSend": "Enable to configure the authorization rule to have Send permissions to the ServiceBus Queue.", "canListen": "Enable to configure the authorization rule to have Listen permissions to the ServiceBus Queue." 
}, "addCosmosDB": { "Name": "Enter a unique name for your CosmosDB account. Only lowercase letters, numbers, and hyphens are allowed. Min 2, max 40 characters.", "PrimaryKeyName": "Specify the primary key of the table.", "Type": "**HASH** - Select to use as the hash (partition) key.\n**Range** - Select to use as the range (sort) key.\n", "Size": "Select the attribute. Data types allowed for primary key attributes are string or number.", "SortKeyName": "SortKey is applicable for range attribute. Stores item in sorted order by the sort key value.", "SortKeyType": "Select key type. Type supported is String or Number.", "TableName": "Specify a unique name for the table.", "PrimaryKey": "Specify the primary key for the table.", "PrimaryKeyDataType": "Select the data type for the key ('String' or 'Number').", "SortKey": "Optional. Use a sort key to store items in order by this attribute, enabling range queries on the table.", "SortKeyDataType": "Select the data type for the sort key ('String' or 'Number').", "CapacityMode": "Select the capacity mode for your CosmosDB account. Options are 'Provisioned throughput' or 'Serverless'.", "PublicAccess": "Choose whether the database should be publicly accessible or restricted to private endpoints.", "EnableFreeTier": "Enable the free tier for your CosmosDB account to reduce cost if eligible.", "DisableKeyBasedMetadataWriteAccess": "Toggle key-based authentication for metadata write operations. When enabled, it disables key-based writes.", "BackupPolicyType": "Select the backup policy type. 'Periodic' backs up at intervals, 'Continuous' provides continuous backup.", "BackupInterval": "Specify the backup interval in minutes. Must be between 60 and 1440.", "BackupRetention": "Specify how long backups are retained in hours. Must be between 8 and 720.", "BackupStorageRedundancy": "Select the backup storage redundancy type. Options include 'Local', 'Geo', or other supported redundancy modes." }, "vmMaintenanceSchedule": { "StartTime": "Select the date and time when the maintenance schedule should begin.", "TimeZone": "Select the time zone that applies to the maintenance schedule.", "Hour": "Enter the number of hours for the maintenance window.", "Minute": "Enter the number of minutes for the maintenance window.", "Repeat": "Enter how often the maintenance window should repeat per interval.", "RepeatType": "Select the unit of time for the repeat interval. Options are Day, Week, or Month.", "EndTime": "Select the date and time when the maintenance schedule should end, \nor leave blank to run indefinitely." }, "AddGcpSubscription": { "Name": "Name of the subscription.", "Topic": "Select the Pub Sub Topic for reference.", "DeliveryType": "Select the Delivery Type.", "AcknowledgementDeadline": "Default is 10 seconds. This value is the maximum time after a subscriber receives a message before the subscriber should acknowledge the message.", "Schema": "Select the Schema.\n1. **None**: Pub/Sub will write the message bytes to a column called data of the selected BigQuery table.\n2. **Use Topic Schema**: Pub/Sub will use the schema of the attached topic, which must be compatible with the BigQuery table schema.\n3. **Use Table Schema**: Pub/Sub will use the schema of the BigQuery table.\n", "TableName": "Specify the name of the table to which to write data.", "DropUnknownFields": "(Optional). 
When Enabled, if `Use Topic Schema` or `Use Table Schema` is used, any fields that are part of the topic schema or message schema but are not part of the BigQuery table schema are dropped when writing to BigQuery.", "WriteMetadata": "(Optional) When Enabled, writes the subscription_name, message_id, publish_time, and attributes to additional columns in the table.", "Bucket": "Select the Cloud Storage Bucket.", "PushEndpoint": "Specify a URL locating the endpoint to which messages should be pushed.", "Attributes": "Specify attributes in JSON format.\n```js\n{\n \"foo\": \"bar\"\n}\n```" }, "AddNodePool": { "Name": "Specify the name for your Node Pool.", "Zones": "Specify the zone for your Node Pool.", "InstanceType": "Select the machine type.", "AutoscalingEnabled": "Enable autoscaler configuration for this node pool. Per-zone limits will enforce the given limits on a per-zone basis.", "UseTotalCount": "Enable to limit the total number of nodes independently of the spreading.", "InitialNodeCount": "Specify the initial number of nodes.", "MinNodeCount": "Minimum number of nodes in the NodePool. Must be less than or equal to Maximum Node Count.", "MaxNodeCount": "Maximum number of nodes in the NodePool. Must be greater than or equal to Minimum Node Count.", "LocationPolicy": "Select the Location Policy.\n1. **BALANCED:** the autoscaler tries to spread the nodes equally among zones.\n2. **ANY:** the autoscaler prioritizes utilization of unused reservations and accounts for current resource availability constraints.\n", "ImageType": "Select the Image Type.", "DiscType": "Select the Disk Type.\n1. **Standard:** Suitable for large data processing workloads that primarily use sequential I/Os.\n2. **Balanced:** This disk type offers performance levels suitable for most general-purpose applications at a price point between that of standard and performance (pd-ssd) persistent disks.\n3. **SSD:** Suitable for enterprise applications and high-performance databases that require lower latency and more IOPS than standard persistent disks provide.\n", "DiscSizeGb": "Specify the boot disk size in GB per node.", "Spot": "When specified, the node pool will provision Spot instances. Spot Instances are ideal for fault-tolerant workloads and may be terminated at any time.", "cgroupMode": "Select the Cgroup Policy. Defaults to `Cgroupv2`.", "LoggingConfig": "Select the logging config parameter for specifying the type of logging agent used in a node pool.\n1. **Default:** Select the default logging variant.\n2. **Max Throughput:** Maximum logging throughput variant.\n", "Tags": "Enter the Network Tags. Multiple Network tags can be specified.", "AutoRepair": "Whether or not the nodes will be automatically repaired.", "Sysctls": "Specify Linux Node Sysctls. For Sysctl configuration options, click [here](https://cloud.google.com/kubernetes-engine/docs/how-to/node-system-config#sysctl-options).\nRefer to the example here.\n```js\n{\n \"net.core.somaxconn\": \"2048\",\n \"net.ipv4.tcp_rmem\": \"4096 87380 6291456\"\n}\n```\n", "ResourceLabels": "Labels are applied to all nodes.\n```js\n {\n \"key\" : \"value\"\n }\n ```\n", "Metadata": "Configure Compute Engine instance metadata.\n ```js\n {\n \"key\" : \"value\"\n }\n ```\n", "UpdateStrategy": "Select the Upgrade Strategy. Defaults to Surge upgrade. 
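The available strategies are Surge and Blue-Green. 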
For more details, click [here](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades?&_ga=2.113758275.-535098261.1654188041#surge).\n", "MaxSurge": "Max surge is the maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.", "MaxUnavailable": "Max unavailable is the maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.", "AllocationTags": "Allocation tags are the simplest way to constrain containers/pods to hosts/nodes. The DuploCloud/Kubernetes orchestrator will make sure containers run on hosts having the same allocation tags.", "AutoUpgrade": "Enable node auto-upgrade for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes.", "TotalMinNodeCount": "Specify the minimum total number of nodes.", "TotalMaxNodeCount": "Specify the maximum total number of nodes.", "NodePoolSoakDuration": "Time to wait after draining the entire blue pool. After this period, the blue pool will be cleaned up. A duration in seconds with up to nine fractional digits, ending with 's'. Example- `3.5s`", "BatchNodeCount": "Number of blue nodes to drain in a batch. Only one of batch_percentage or batch_node_count can be specified.", "BatchPercentage": "Percentage of the blue pool nodes to drain in a batch. The range of this field should be (0.0, 1.0). Only one of batch_percentage or batch_node_count can be specified.", "BatchSoakDuration": "Enter the soak time after each batch gets drained. A duration in seconds with up to nine fractional digits, ending with 's'. Example- `3.5s`." }, "AddTaskDefBasic": { "Name": "Specify a name for your task definition. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.", "OperatingSystemArchitecture": "Select the Operating system/Architecture configuration for the task definition.", "memory": "The amount of memory (in MiB) used by the task. It can be expressed as an integer using MiB, for example 1024, or as a string using GB, for example '1GB' or '1 gb'.", "vcpus": "The number of CPU units used by the task. 
It can be expressed as an integer using CPU units, for example 1024, or as a string using vCPUs, for example '1 vCPU' or '1 vcpu'.", "volumes": "Volumes which can be mounted within the container as documented [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#volumes), in JSON format.\nA sample value with all the possible options can look like below.\n```js\n[ \n { \n \"dockerVolumeConfiguration\": { \n \"autoprovision\": boolean,\n \"driver\": \"string\",\n \"driverOpts\": { \n \"string\" : \"string\" \n },\n \"labels\": { \n \"string\" : \"string\" \n },\n \"scope\": \"string\"\n },\n \"efsVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"accessPointId\": \"string\",\n \"iam\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\",\n \"transitEncryption\": \"string\",\n \"transitEncryptionPort\": number\n },\n \"fsxWindowsFileServerVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"credentialsParameter\": \"string\",\n \"domain\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\"\n },\n \"host\": { \n \"sourcePath\": \"string\"\n },\n \"name\": \"string\"\n }\n]\n```" }, "AddGceVmHost": { "Name": "Specify the name for your GCE VM.", "Zone": "Select the zone that the VM should be created in.", "InstanceType": "Select the Machine Type.", "AgentPlatform": "Select the container orchestration platform.\n1. **Linux Docker/Native:** Select this option if you want to run Docker native services that are Linux-based.\n2. **None:** This option has to be selected when the VM instance is not used for running containers.\n", "ImageId": "Enter the image name. Specify in this format `projects/{project}/global/images/{image}`.", "EnablePublicIpAddress": "Enable to assign a public IP address.", "Tags": "Enter Network tags.", "AcceleratorType": "Specify the Accelerator Type (GPU type). Google does not offer every instance/GPU combination in every region/zone.\nEnter the compatible type based on the instance type and zone. For example, the `nvidia-tesla-t4` accelerator type is supported for the `n1-standard-1` instance type in zone `us-west4-a`.\nFor GPU regions and zone availability, click [here](https://cloud.google.com/compute/docs/gpus/gpu-regions-zones).\n", "AcceleratorCount": "Specify the number of GPUs.", "Labels": "Specify Labels\n```js\n{\n\"key\" : \"value\"\n}\n```\n", "Metadata": "Configure Compute Engine instance metadata.\n```js\n{\n\"key\" : \"value\"\n}\n```\n", "Base64UserData": "Configure startup-script metadata in the format below. ```echo \"Hello from startup script!\" > /test.txt```" }, "addAksCluster": { "clusterName": "Enter the AKS cluster name. 
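Example- `dev-aks-01`. 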
Only letters, numbers, hyphens, and underscores are allowed.", "PricingTier": "Select the pricing tier for your AKS cluster.", "k8sVersion": "Select the Kubernetes version for the cluster.", "privateCluster": "Choose whether this cluster is private or public.", "NodeCount": "Specify the initial number of nodes in the default node pool.", "MaxPods": "Specify the maximum pods allowed per node in the node pool.", "vmSize": "Select the VM size for cluster nodes.", "EnableWorkloadIdentity": "Enable workload identity for AKS-managed identities.", "EnableImageCleaner": "Enable automatic cleanup of old images on nodes.", "ImageCleanerIntervalInDays": "Specify the frequency in days to clean images if Image Cleaner is enabled.", "EnableAutoScaling": "Enable automatic scaling for the system node pool.", "MinCount": "Specify the minimum number of nodes for auto-scaling.", "MaxCount": "Specify the maximum number of nodes for auto-scaling.", "IsManagedAadEnabled": "Enable Azure Active Directory managed integration.", "IsAzureRbacEnabled": "Enable Azure Role-Based Access Control for this cluster.", "TenantId": "Enter the Azure Active Directory (AAD) Tenant ID where this AKS cluster will be registered. This is required for AAD-managed integration.", "AdminGroupObjectIds": "Enter the object IDs of the Okta or Azure AD groups that should have administrative access to the cluster. Type each ID and press Enter to add it.", "NetworkPlugin": "Choose the networking plugin for the cluster ('Azure' or 'Kubenet').", "NodeResourceGroup": "Enter the Azure Resource Group where node resources are created.", "OutboundType": "Select how outbound traffic from the cluster is managed.", "EnableBlobCsiDriver": "Enable the Blob CSI driver for persistent volume support.", "DisableRunCommand": "Disable Azure Run Command functionality on nodes.", "LinuxAdminUsername": "Enter the Admin username for Linux nodes.", "LinuxSshPublicKey": "Enter the SSH public key for Linux node access.", "AddCriticalTaintToSystemAgentPool": "Add a critical taint to the system agent pool to prevent user workloads from scheduling here." }, "AddPgSql": { "Name": "Enter a unique name for the Redis instance.", "serviceTier": "Select the service tier for the Redis instance.", "nonTlsOption": "Choose whether the non-SSL port is enabled.", "PublicAccess": "Specify whether the Redis instance should be publicly accessible." }, "cwrule-target-lambda-add": { "Lambda Function Name": "Select the Lambda function." }, "taskdef": { "Name": "Specify a name for your task definition. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.", "Image": "Image for your docker container.", "memory": "The amount of memory (in MiB) used by the task. It can be expressed as an integer using MiB, for example 1024, or as a string using GB, for example '1GB' or '1 gb'.", "vcpus": "The number of CPU units used by the task. It can be expressed as an integer using CPU units, for example 1024, or as a string using vCPUs, for example '1 vCPU' or '1 vcpu'.", "Port": "Port mappings allow containers to access ports on the host container instance to send or receive traffic.", "Protocol": "Protocol for this port.", "environmentvars": "Environment variables to be passed to the container in the JSON format as below.\n```js\n[\n {\n \"Name\": \"\",\n \"Value\": \"\"\n }\n]\n```\n", "command": "The command that is passed to the container. 
This parameter maps to **Cmd** in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information about the Docker CMD parameter, click [here](https://docs.docker.com/engine/reference/builder/#cmd).\nFollowing is an example value to make your container sleep for debugging.\n```js\n[\n \"sleep\",\n \"500000\"\n]\n```\n", "healthcheck": "Health check configuration, in JSON format, which helps determine if the container is healthy. The JSON has the following attributes.\n1. **command**: A string array representing the command that the container runs to determine if it is healthy. The string array can start with CMD to execute the command arguments directly, or CMD-SHELL to run the command with the container's default shell. If neither is specified, CMD is used by default.\n2. **interval**: The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds.\n3. **timeout**: The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds.\n4. **retries**: The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is three retries.\n5. **startPeriod**: The optional grace period within which to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You may specify between 0 and 300 seconds. The startPeriod is disabled by default.\nFollowing is an example that performs a health check by calling an API.\n```js\n{\n \"command\" : [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ],\n \"interval\": 20,\n \"timeout\" : 5,\n \"retries\" : 10,\n \"startPeriod\" : 20\n}\n```\n", "Secret": "This is another way of setting up environment variables from AWS secrets, in JSON format. 
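The `ValueFrom` field takes the secret's ARN; for Secrets Manager the format is `arn:aws:secretsmanager:region:account-id:secret:secret-name:json-key:version-stage:version-id`, where the trailing `json-key`, `version-stage`, and `version-id` parts are optional. 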
\n```js\n[\n {\n \"Name\": \"\",\n \"ValueFrom\": \":::\"\n },\n {\n \"Name\": \"DB_HOST\",\n \"ValueFrom\": \"arn:aws:secretsmanager:us-west-2:2432432434343:secret:db-secret:DB_HOST::\"\n }\n]\n```\n", "containerotherconfig": "All other advance properties documented [here](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DeregisterTaskDefinition.html) applicable only for the Fargate in a JSON format.\nSample value can look like below.\n```js\n{\n \"LogConfiguration\": {\n \"LogDriver\": {\n \"Value\": \"awslogs\"\n },\n \"Options\": {\n \"awslogs-create-group\": \"true\",\n \"awslogs-group\": \"/ecs/duploservices-nonprod-api\",\n \"awslogs-region\": \"us-west-2\",\n \"awslogs-stream-prefix\": \"ecs\"\n },\n \"SecretOptions\": []\n }\n}\n```\n", "volumes": "Volumes which can be mounted within container as documented [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#volumes) in a json format.\nSample Value with all the possible options can look like as below.\n```js\n[ \n { \n \"dockerVolumeConfiguration\": { \n \"autoprovision\": boolean,\n \"driver\": \"string\",\n \"driverOpts\": { \n \"string\" : \"string\" \n },\n \"labels\": { \n \"string\" : \"string\" \n },\n \"scope\": \"string\"\n },\n \"efsVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"accessPointId\": \"string\",\n \"iam\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\",\n \"transitEncryption\": \"string\",\n \"transitEncryptionPort\": number\n },\n \"fsxWindowsFileServerVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"credentialsParameter\": \"string\",\n \"domain\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\"\n },\n \"host\": { \n \"sourcePath\": \"string\"\n },\n \"name\": \"string\"\n }\n]\n```\n", "Essential_1": "If enabled, container fails or stops for any reason, all other containers that are part of the task are stopped. \nIf disabled, then its failure doesn't affect the rest of the containers in a task.\n", "OperatingSystemArchitecture": "Select the Operating system/Architecture configuration for the task definition." }, "dataPipelineAdd": { "Name": "Unique Data Pipeline Name", "PipeLineDef": "Please provide Data Pipeline defination json. 
Provide EmrCluster details If using existing EmrCluster.\n```js\n{\n\"PipelineObjects\": [\n{\n\"Id\": \"Default\",\n\"Name\": \"Default\",\n\"Fields\": [\n{\n\"Key\": \"failureAndRerunMode\",\n\"StringValue\": \"CASCADE\"\n},\n{\n\"Key\": \"pipelineLogUri\",\n\"StringValue\": \"s3://YOUR-S3-FOLDER/logs/data-pipelines/\"\n},\n{\n\"Key\": \"scheduleType\",\n\"StringValue\": \"cron\"\n}\n]\n},\n{\n\"Id\": \"EmrConfigurationId_Q9rpL\",\n\"Name\": \"DefaultEmrConfiguration1\",\n\"Fields\": [\n{\n\"Key\": \"configuration\",\n\"RefValue\": \"EmrConfigurationId_LFzOl\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"EmrConfiguration\"\n},\n{\n\"Key\": \"classification\",\n\"StringValue\": \"spark-env\"\n}\n]\n},\n{\n\"Id\": \"ActionId_SUEgm\",\n\"Name\": \"TriggerNotificationOnFail\",\n\"Fields\": [\n{\n\"Key\": \"subject\",\n\"StringValue\": \"Backcountry-clickstream-delta-hourly: #{node.@pipelineId} Error: #{node.errorMessage}\"\n},\n{\n\"Key\": \"message\",\n\"StringValue\": \"Backcountry-clickstream-delta-hourly failed to run\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"SnsAlarm\"\n},\n{\n\"Key\": \"topicArn\",\n\"StringValue\": \"arn:aws:sns:us-west-2:269378226633:duploservices-pravin-test-del77-128329325849\"\n}\n]\n},\n{\n\"Id\": \"EmrActivityObj\",\n\"Name\": \"EmrActivityObj\",\n\"Fields\": [\n{\n\"Key\": \"schedule\",\n\"RefValue\": \"ScheduleId_NfOUF\"\n},\n{\n\"Key\": \"step\",\n\"StringValue\": \"#{myEmrStep}\"\n},\n{\n\"Key\": \"runsOn\",\n\"RefValue\": \"EmrClusterObj\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"EmrActivity\"\n}\n]\n},\n{\n\"Id\": \"EmrConfigurationId_LFzOl\",\n\"Name\": \"DefaultEmrConfiguration2\",\n\"Fields\": [\n{\n\"Key\": \"property\",\n\"RefValue\": \"PropertyId_NA18c\"\n},\n{\n\"Key\": \"classification\",\n\"StringValue\": \"export\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"EmrConfiguration\"\n}\n]\n},\n{\n\"Id\": \"EmrClusterObj\",\n\"Name\": \"EmrClusterObj\",\n\"Fields\": [\n{\n\"Key\": \"taskInstanceType\",\n\"StringValue\": \"#{myTaskInstanceType}\"\n},\n{\n\"Key\": \"onFail\",\n\"RefValue\": \"ActionId_SUEgm\"\n},\n{\n\"Key\": \"maximumRetries\",\n\"StringValue\": \"1\"\n},\n{\n\"Key\": \"configuration\",\n\"RefValue\": \"EmrConfigurationId_Q9rpL\"\n},\n{\n\"Key\": \"coreInstanceCount\",\n\"StringValue\": \"#{myCoreInstanceCount}\"\n},\n{\n\"Key\": \"masterInstanceType\",\n\"StringValue\": \"#{myMasterInstanceType}\"\n},\n{\n\"Key\": \"releaseLabel\",\n\"StringValue\": \"#{myEMRReleaseLabel}\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"EmrCluster\"\n},\n{\n\"Key\": \"terminateAfter\",\n\"StringValue\": \"3 Hours\"\n},\n{\n\"Key\": \"bootstrapAction\",\n\"StringValue\": \"#{myBootstrapAction}\"\n},\n{\n\"Key\": \"taskInstanceCount\",\n\"StringValue\": \"#{myTaskInstanceCount}\"\n},\n{\n\"Key\": \"coreInstanceType\",\n\"StringValue\": \"#{myCoreInstanceType}\"\n},\n{\n\"Key\": \"applications\",\n\"StringValue\": \"spark\"\n}\n]\n},\n{\n\"Id\": \"ScheduleId_NfOUF\",\n\"Name\": \"Every 10 hr\",\n\"Fields\": [\n{\n\"Key\": \"period\",\n\"StringValue\": \"10 Hours start time 2\"\n},\n{\n\"Key\": \"startDateTime\",\n\"StringValue\": \"2022-01-07T21:21:00\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"Schedule\"\n},\n{\n\"Key\": \"endDateTime\",\n\"StringValue\": \"2022-01-08T15:44:28\"\n}\n]\n},\n{\n\"Id\": \"PropertyId_NA18c\",\n\"Name\": \"DefaultProperty1\",\n\"Fields\": [\n{\n\"Key\": \"type\",\n\"StringValue\": \"Property\"\n},\n{\n\"Key\": \"value\",\n\"StringValue\": \"/usr/bin/python3\"\n},\n{\n\"Key\": \"key\",\n\"StringValue\": 
\"PYSPARK_PYTHON\"\n}\n]\n}\n],\n\"ParameterValues\": [\n{\n\"Id\": \"myEMRReleaseLabel\",\n\"StringValue\": \"emr-6.1.0\"\n},\n{\n\"Id\": \"myMasterInstanceType\",\n\"StringValue\": \"m3.xlarge\"\n},\n{\n\"Id\": \"myBootstrapAction\",\n\"StringValue\": \"s3://YOUR-S3-FOLDER/bootstrap_actions/your_boottrap_and_python_lib_installer.sh\"\n},\n{\n\"Id\": \"myEmrStep\",\n\"StringValue\": \"command-runner.jar,spark-submit,--packages,io.delta:delta-core_2.12:0.8.0,--conf,spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension,--conf,spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog,--num-executors,2,--executor-cores,2,--executor-memory,2G,--conf,spark.driver.memoryOverhead=4096,--conf,spark.executor.memoryOverhead=4096,--conf,spark.dynamicAllocation.enabled=false,--name,PixelClickstreamData,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy1.zip,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy2.zip,s3://YOUR-S3-FOLDER/your_script.py, your_script_arg1, your_script_arg2\"\n},\n{\n\"Id\": \"myEmrStep\",\n\"StringValue\": \"command-runner.jar,aws,athena,start-query-execution,--query-string,MSCK REPAIR TABLE your_database.your_table,--result-configuration,OutputLocation=s3://YOUR-S3-FOLDER/logs/your_query_parquest\"\n},\n{\n\"Id\": \"myCoreInstanceType\",\n\"StringValue\": \"m3.xlarge\"\n},\n{\n\"Id\": \"myCoreInstanceCount\",\n\"StringValue\": \"1\"\n}\n],\n\"ParameterObjects\": [\n{\n\"Id\": \"myEC2KeyPair\",\n\"Attributes\": [\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"An existing EC2 key pair to SSH into the master node of the EMR cluster as the user \\\"hadoop\\\".\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"EC2 key pair\"\n},\n{\n\"Key\": \"optional\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myEmrStep\",\n\"Attributes\": [\n{\n\"Key\": \"helpLink\",\n\"StringValue\": \"https://docs.aws.amazon.com/console/datapipeline/emrsteps\"\n},\n{\n\"Key\": \"watermark\",\n\"StringValue\": \"s3://myBucket/myPath/myStep.jar,firstArg,secondArg\"\n},\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"A step is a unit of work you submit to the cluster. 
You can specify one or more steps\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"EMR step(s)\"\n},\n{\n\"Key\": \"isArray\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myTaskInstanceType\",\n\"Attributes\": [\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"Task instances run Hadoop tasks.\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"Task node instance type\"\n},\n{\n\"Key\": \"optional\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myCoreInstanceType\",\n\"Attributes\": [\n{\n\"Key\": \"default\",\n\"StringValue\": \"m1.medium\"\n},\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"Core instances run Hadoop tasks and store data using the Hadoop Distributed File System (HDFS).\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"Core node instance type\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myEMRReleaseLabel\",\n\"Attributes\": [\n{\n\"Key\": \"default\",\n\"StringValue\": \"emr-5.13.0\"\n},\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"Determines the base configuration of the instances in your cluster, including the Hadoop version.\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"EMR Release Label\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myCoreInstanceCount\",\n\"Attributes\": [\n{\n\"Key\": \"default\",\n\"StringValue\": \"2\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"Core node instance count\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"Integer\"\n}\n]\n},\n{\n\"Id\": \"myTaskInstanceCount\",\n\"Attributes\": [\n{\n\"Key\": \"description\",\n\"StringValue\": \"Task node instance count\"\n},\n{\n\"Key\": \"optional\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"Integer\"\n}\n]\n},\n{\n\"Id\": \"myBootstrapAction\",\n\"Attributes\": [\n{\n\"Key\": \"helpLink\",\n\"StringValue\": \"https://docs.aws.amazon.com/console/datapipeline/emr_bootstrap_actions\"\n},\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"Bootstrap actions are scripts that are executed during setup before Hadoop starts on every cluster node.\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"Bootstrap action(s)\"\n},\n{\n\"Key\": \"isArray\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"optional\",\n\"StringValue\": \"true\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n},\n{\n\"Id\": \"myMasterInstanceType\",\n\"Attributes\": [\n{\n\"Key\": \"default\",\n\"StringValue\": \"m1.medium\"\n},\n{\n\"Key\": \"helpText\",\n\"StringValue\": \"The Master instance assigns Hadoop tasks to core and task nodes, and monitors their status.\"\n},\n{\n\"Key\": \"description\",\n\"StringValue\": \"Master node instance type\"\n},\n{\n\"Key\": \"type\",\n\"StringValue\": \"String\"\n}\n]\n}\n]\n}\n```", "RootActivity#scheduleType": "Valid Values are cron, ondemand, timeseries.\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html)\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-schedule.html)\n\n", "EmrActivity#step": "EmrActivity steps could be any supported steps/jobs like - spark-submit, hive, pig, athena ... 
etc.\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-emractivity.html)\n\n```js\n[ \"command-runner.jar,spark-submit,--packages,io.delta:delta-core_2.12:0.8.0,--conf,spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension,--conf,spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog,--num-executors,2,--executor-cores,2,--executor-memory,2G,--conf,spark.driver.memoryOverhead=4096,--conf,spark.executor.memoryOverhead=4096,--conf,spark.dynamicAllocation.enabled=false,--name,PixelClickstreamData,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy1.zip,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy2.zip,s3://YOUR-S3-FOLDER/your_script.py, your_script_arg1, your_script_arg2\",\n\"command-runner.jar,aws,athena,start-query-execution,--query-string,MSCK REPAIR TABLE your_database.your_table,--result-configuration,OutputLocation=s3://YOUR-S3-FOLDER/logs/your_query_parquest\"\n]\n```\n", "EmrCluster#onFail": "EmrCluster onFail, e.g. an SNS message.\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-snsalarm.html)\n\n```js\n[{\n\"id\" : \"SuccessNotify\",\n\"name\" : \"SuccessNotify\",\n\"type\" : \"SnsAlarm\",\n\"topicArn\" : \"arn:aws:sns:us-east-1:28619EXAMPLE:ExampleTopic\",\n\"subject\" : \"COPY SUCCESS: #{node.@scheduledStartTime}\",\n\"message\" : \"Files were copied from #{node.input} to #{node.output}.\"\n}]\n```\n", "EmrCluster#configuration": "EmrCluster configuration.\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-emractivity.html)\n\n```js\n[\n{\n\"classification\": \"core-site\",\n\"properties\": {\n\"io.file.buffer.size\": \"4096\",\n\"fs.s3.block.size\": \"67108864\"\n}\n},\n{\n\"classification\": \"hadoop-env\",\n\"properties\": {\n\n},\n\"configurations\": [\n{\n\"classification\": \"export\",\n\"properties\": {\n\"YARN_PROXYSERVER_HEAPSIZE\": \"2396\"\n}\n}\n]\n},\n{\n\"Classification\": \"spark\",\n\"Properties\": {\n\"maximizeResourceAllocation\": \"true\"\n}\n},\n{\n\"Classification\": \"spark-hive-site\",\n\"Properties\": {\n\"hive.metastore.client.factory.class\": \"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory\",\n\"hive.metastore.glue.catalogid\": \"acct-id\"\n}\n},\n{\n\"Classification\": \"spark-env\",\n\"Properties\": {\n\n},\n\"Configurations\": [\n{\n\"Classification\": \"export\",\n\"Properties\": {\n\"PYSPARK_PYTHON\": \"/usr/bin/python34\"\n},\n\"Configurations\": [\n\n]\n}\n]\n},\n{\n\"Classification\": \"spark-defaults\",\n\"Properties\": {\n\"spark.yarn.appMasterEnv.PYSPARK_PYTHON\": \"/home/hadoop/venv/bin/python3.4\",\n\"spark.executor.memory\": \"2G\"\n}\n},\n{\n\"Classification\": \"emrfs-site\",\n\"Properties\": {\n\"fs.s3.enableServerSideEncryption\": \"true\"\n}\n}\n]\n```\n", "EmrCluster#applicatiob": "EmrCluster application.\n[Click Here](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-emractivity.html)\n\n```js\n[\n\"Spark\",\n\"Hadoop\",\n\"Pig\",\n\"Hive\",\n\"Jupyterlab\"\n]\n```" }, "AddServiceDeployment": { "ServiceName": "Any friendly name to identify your service.", "AllocationTag": "A string value that needs to be a substring of the corresponding allocation tag value set on a Host. If there is no host with such an allocation tag value then the service will not get deployed. 
The simplest example: if you want the container to be deployed only on a certain set of hosts, set an allocation tag on those hosts called web, and then set the allocation tag here to web.", "ReplicationStrategy": "Replication Strategy can be used to manage the replica count for this Deployment/StatefulSet. Replication Strategy has three options as below.\n 1. **Static**: This option can be used to set a fixed count for the replicas.\n 2. **Daemonset**: If this option is selected, DuploCloud will make sure pods run on every host created in this tenant.\n 3. **Horizontal Pod Autoscaler**: This is a more advanced option that can be used to automatically scale the replicas up or down based on different metrics like CPU usage, memory usage, ingress requests per second, etc. [Click Here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for more information on *Horizontal Pod Autoscaler*.\n", "Replicas": "Specify the number of replicas.", "ReplicaPlacement": "Select the replication placement strategy.\n 1. **First Available**: Replicas are not required to be spread across availability zones. Replicas prefer to be scheduled on different hosts, but this is not required.\n 2. **Place on Different Hosts**: Replicas are not required to be spread across availability zones. Replicas are required to be placed on different hosts.\n 3. **Spread Across Zones**: Replicas are required to be spread across availability zones. Replicas prefer to be scheduled on different hosts, but this is not required.\n 4. **Different Hosts and Spread Across Zones**: Replicas are required to be spread across availability zones. Replicas are required to be placed on different hosts.\n", "TolerateSpotInstance": "Select to run pods on Spot Instances. A `tolerations` property will be added in `Other Container Config` to support running pods on Spot Instances.", "DeploymentLabelKey": "Enter the label key, for example `tier`.", "DeploymentLabelValue": "Enter the label value, for example `frontend`.", "DeploymentAnnotationKey": "Enter the annotation key, for example `description`.", "DeploymentAnnotationValue": "Enter the annotation value, for example `Deployment for the MyApp frontend service`.", "DeploymentStrategy": "Deployment Strategy specifications as documented [here](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment). A sample value can look like below.\n```yaml\n\nRollingUpdate:\nMaxSurge: 1\nMaxUnavailable: 0\n```", "HPAConfig": "Horizontal pod autoscaler specifications as documented [here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). A sample value can look like below.\n```yaml\n\nmaxReplicas: 5\nmetrics:\n - resource:\n name: cpu\n target:\n averageUtilization: 80\n type: Utilization\n type: Resource\nminReplicas: 2\n```" }, "addECR": { "Name": "Specify the name of the ECR repository.", "KmsEncryption": "Select the KMS key for the repository.", "EnableTagImmutability": "Select to enable tag immutability for the repository.", "EnableScanImageOnPush": "Select to scan images after they are pushed to the repository." }, "AddK8sDaemonset": { "Name": "Specify the name of the Daemonset; it must be unique.", "IsTenantLocal": "When set to true, the daemonset will deploy only on hosts within this tenant, as opposed to the entire cluster.", "Configuration": "Add Daemonset Configurations." }, "devops-async-update": { "MaximumRetryAttempts": "Enter the number of times to retry the function if it fails. 
Valid values are 0, 1, or 2.", "MaximumEventAgeInSeconds": "Enter the maximum age of events in seconds. Must be between 60 and 21600. Events older than this will be discarded." }, "AddResourceQuota": { "name": "Resource Quota Name.", "cpu": "This input can be used to limit the CPU consumed by all pods in a non-terminal state. \nFor 5 CPUs the value should be **5**; for half a CPU the value should be **500m**. \n", "memory": "This input can be used to limit the memory consumed by all pods in a non-terminal state. \nFor 512 MB the value should be **512Mi**; for 5 GB of RAM the value should be **5Gi**. \n", "otherLimits": "You can use this input for all other quotas, like storage, object count, etc. The value has to be provided in YAML format like below. \n```yaml\nrequests.storage: 500Gi\npersistentvolumeclaims: 10\n```\n", "scopeSelector": "Each quota can have an associated set of scopes. A quota will only measure usage for a resource if it matches the intersection of enumerated scopes. \nWhen a scope is added to the quota, it limits the number of resources it supports to those that pertain to the scope. Resources specified on the quota outside of the allowed set result in a validation error. \nThe value has to be provided in YAML format as below. \n```yaml\nmatchExpressions:\n- operator: In\n  scopeName: PriorityClass\n  values:\n  - middle\n```" }, "appcontainerappenv": { "Name": "Enter the name of the Container App Environment.", "WorkloadProfiles0": "Select a Workload Profile for this environment.", "workloadName": "Enter a name for the individual Workload Profile (max 15 chars).", "InstanceMinCount": "Specify the minimum number of instances for this profile.", "InstanceMaxCount": "Specify the maximum number of instances for this profile.", "appLogDestinations": "Select where logs should be sent.", "logAnalyticsWorkspaces": "Select the Log Analytics Workspace(s) for monitoring this environment.", "SubnetId": "Select the subnet for this environment." }, "AddPVC": { "name": "Name for the Persistent Volume Claim.", "storageClassName": "Storage Class Name for the Persistent Volume Claim to be created.", "volumeName": "Provide a Volume Name to claim an existing PV.", "volumeMode": "Kubernetes supports two volumeModes of PersistentVolumes: `Filesystem` and `Block`. \nVolume Mode is an optional API parameter. `Filesystem` is the default mode used when the volumeMode parameter is omitted. \nA volume with volumeMode: `Filesystem` is mounted into Pods as a directory. If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time. \nYou can set the value of volumeMode to `Block` to use a volume as a raw block device. Such a volume is presented to a Pod as a block device, without any filesystem on it. This mode is useful to provide a Pod the fastest possible way to access a volume, without any filesystem layer between the Pod and the volume. \n", "accessModes": "A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown below, providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.\nThe access modes are:\n1. **ReadWriteOnce**\nthe volume can be mounted as read-write by a single node. 
ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.\n2. **ReadOnlyMany**\nthe volume can be mounted as read-only by many nodes.\n3. **ReadWriteMany**\nthe volume can be mounted as read-write by many nodes.\n4. **ReadWriteOncePod**\nthe volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.\n", "resources": "Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md) applies to both volumes and claims.\nSample value:\n```yml\nrequests:\n  storage: 10Gi\n```\n", "pvcAnnotations": "Kubernetes annotations in key value format. A sample value is like below.\n```yml\nkey1: value1\nkey2: value2\n```\n", "pvcLabels": "Kubernetes labels in key value format. A sample value is like below.\n```yml\nkey1: value1\nkey2: value2\n```" }, "AddK8sOciRepo": { "Name": "Name of your repository.", "URL": "Specify the OCI repository URL. Example: `oci://registry-1.docker.io/bitnamicharts/nginx`", "tag": "Specify the tag of the image to pull.", "mediaType": "Specify which layer should be extracted from the OCI Artifact.", "operation": "Select the operation from the dropdown.", "Secret": "Select the `dockerconfigjson` secret created from a Kubernetes Secret to authenticate to the OCI registry." }, "keyVault": { "Name": "Enter a unique name for the Key Vault. The name must be 3\u201324 characters long and can contain only letters, numbers, and hyphens.", "SKUPricing": "Select the pricing tier for the Key Vault. The Standard SKU is suitable for most use cases.", "PurgeProtection": "Enables purge protection for the Key Vault. Once enabled, this setting cannot be disabled and prevents permanent deletion of the vault.", "RetentionDays": "Specify the number of days to retain soft-deleted Key Vault items before permanent deletion. Valid values range from 7 to 90 days." }, "AddSbPrivateEndpoint": { "Name": "Specify the Private EndPoint name.", "SubnetId": "Select the Subnet." }, "AddTaskDefAdvanced": { "ContainerName_0": "Name of the Container.", "Port": "Port mappings allow containers to access ports on the host container instance to send or receive traffic.", "Protocol": "Protocol for this port.", "Image": "Image for your Docker container.", "environmentvars": "Environment variables to be passed to the container, in JSON format as below.\n```js\n[\n {\n \"Name\": \"\",\n \"Value\": \"\"\n }\n]\n```\n", "command": "The command that is passed to the container. This parameter maps to **Cmd** in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information about the Docker CMD parameter, click [here](https://docs.docker.com/engine/reference/builder/#cmd).\nFollowing is an example value to make your container sleep for debugging.\n```js\n[\n \"sleep\",\n \"500000\"\n]\n```\n", "healthcheck": "Health check configuration, in JSON format, which helps determine whether the container is healthy. The JSON has the following attributes.\n1. **command**: A string array representing the command that the container runs to determine if it is healthy. The string array can start with CMD to execute the command arguments directly, or CMD-SHELL to run the command with the container's default shell. 
If neither is specified, CMD is used by default.\n2. **interval**: The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds.\n3. **timeout**: The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5 seconds.\n4. **retries**: The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is three retries.\n5. **startPeriod**: The optional grace period within which to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You may specify between 0 and 300 seconds. The startPeriod is disabled by default.\nFollowing is an example that performs a health check by calling an API.\n```js\n{\n \"command\" : [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ],\n \"interval\": 20,\n \"timeout\" : 5,\n \"retries\" : 10,\n \"startPeriod\" : 20\n}\n```\n", "Secret": "This is another way of setting up the environment values from AWS secrets, in JSON format. \n```js\n[\n {\n \"Name\": \"\",\n \"ValueFrom\": \":::\"\n },\n {\n \"Name\": \"DB_HOST\",\n \"ValueFrom\": \"arn:aws:secretsmanager:us-west-2:2432432434343:secret:db-secret:DB_HOST::\"\n }\n]\n```\n", "containerotherconfig": "All other advanced properties documented [here](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DeregisterTaskDefinition.html), applicable only for Fargate, in JSON format.\nA sample value can look like below.\n```js\n{\n \"LogConfiguration\": {\n \"LogDriver\": {\n \"Value\": \"awslogs\"\n },\n \"Options\": {\n \"awslogs-create-group\": \"true\",\n \"awslogs-group\": \"/ecs/duploservices-nonprod-api\",\n \"awslogs-region\": \"us-west-2\",\n \"awslogs-stream-prefix\": \"ecs\"\n },\n \"SecretOptions\": []\n }\n}\n```\n", "volumes": "Volumes which can be mounted within the container, as documented [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#volumes), in JSON format.\nA sample value with all the possible options can look like below.\n```js\n[ \n { \n \"dockerVolumeConfiguration\": { \n \"autoprovision\": boolean,\n \"driver\": \"string\",\n \"driverOpts\": { \n \"string\" : \"string\" \n },\n \"labels\": { \n \"string\" : \"string\" \n },\n \"scope\": \"string\"\n },\n \"efsVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"accessPointId\": \"string\",\n \"iam\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\",\n \"transitEncryption\": \"string\",\n \"transitEncryptionPort\": number\n },\n \"fsxWindowsFileServerVolumeConfiguration\": { \n \"authorizationConfig\": { \n \"credentialsParameter\": \"string\",\n \"domain\": \"string\"\n },\n \"fileSystemId\": \"string\",\n \"rootDirectory\": \"string\"\n },\n \"host\": { \n \"sourcePath\": \"string\"\n },\n \"name\": \"string\"\n }\n]\n```\n", "Essential_1": "If enabled and the container fails or stops for any reason, all other containers that are part of the task are stopped. \nIf disabled, its failure doesn't affect the rest of the containers in a task." 
}, "AddSQS": { "Name": "The name of the queue.", "QueueType": "Select AWS supported standard or FIFO queues.", "MessageRetentionPeriod": "The number of seconds Amazon SQS retains a message, from 60 (1 minute) to 1209600 (14 days).", "VisibilityTimeout": "The visibility timeout for the queue in seconds. Inputs allowed from 0 to 43200 (12 hours).", "ContentBasedDuplication": "Enables content-based deduplication for FIFO queues.", "DuplicationScope": "Specifies whether message deduplication occurs at the message group or queue level.", "FIFOthroughput": "Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group.", "DelaySeconds": "Set the Delivery Delay when sending a message to a queue in seconds. Default is `0`. Maximum delay can be set is of 15 minutes (`900` seconds).", "needsDeadLetterQueueConfiguration": "Enable Dead-letter queue configuration.", "DeadLetterTargetQueueName": "Select the target SQS.", "MaxMessageTimesReceivedBeforeDeadLetterQueue": "Enter Maximum receives. Value should be between 1 and 1000." }, "addKeyVaultSecret": { "name": "Specify the name of the Key Vault Secret.", "value": "Specify the value of the Key Vault Secret", "contentType": "Specify description of the secret contents (e.g. password, connection string, etc)." }, "airflow_add_sub_form_configure": { "EnvironmentClass_other": "Environment class for the cluster.", "Schedulers": "The number of schedulers that you want to run in your environment. v2.0.2 and above accepts `2` - `5`, default `2`. v1.10.12 accepts `1`.", "MinWorkers": "The minimum number of workers that you want to run in your environment.", "MaxWorkers": "The maximum number of workers that can be automatically scaled up. Value need to be between `1` and `25`.", "AirflowConfigurationOptions_str": "Specify parameters to override airflow options.\nSample Value can be as below\n```js\n{\n \"core.log_format\": \"[%%(asctime)s] {{%%(filename)s:%%(lineno)d}} %%(levelname)s - %%(message)s\"\n}\n```\n" }, "emr_serverless_add_sub_form_pre_capacity": { "DriverCount": "Specify initialized Driver Capacity.", "ExecutorCount": "Specify initialized Executors Capacity.", "DriverCpu": "Specify vCPU per driver.", "DriverMemory": "Specify Memory in GB per driver.", "DriverDisk": "Specify Disk in GB per driver.", "ExecutorCpu": "Specify vCPU per executor.", "ExecutorMemory": "Specify Memory in GB per executor.", "ExecutorDisk": "Specify Disk in GB per executor." }, "AddK8sHelmRepo": { "Name": "Specify the name of the Helm Repository.", "IntervalTime": "Specify the time interval for automatic updates in MM:SS.", "URL": "Specify the Helm Repository URL. For example- `https://helm.github.io/examples`.", "SourceType": "Select the sourcing repository. [Details](https://fluxcd.io/flux/components/source/helmrepositories/).", "Secret": "Select a Kubernetes `dockerconfigjson` secret used to authenticate to a private container registry." }, "rdsRestorePointTime": { "TargetName": "Specify the rds identifier name. Should be unique across all tenants.", "latestRestorableTime": "Select to restore DB from the latest backup time.", "type": "Select Custom, enter the date and time to which you want to restore the DB instance." }, "emr_serverless_add_sub_form_basics": { "Name": "The name of the application.", "ReleaseLabel_other": "The EMR release version associated with the application.", "Type": "The type of application you want to start, such as `spark` or `hive`.", "Architecture": "The CPU architecture of an application. 
Valid values are `arm64` or `x86_64`. Default value is `x86_64`.", "DriverImageUri": "To use a custom Image URI for drivers in your application, specify the ECR location of the image: `account-id.dkr.ecr.region-id.amazonaws.com/your-Amazon-ECR-repo-name[:tag] or [@sha]`.\nIt must be compatible with the selected EMR release (6.9.0 and above) and located in the same region.\n", "ExecutorImageUri": "To use a custom Image URI for executors, specify the ECR location of the image: `account-id.dkr.ecr.region-id.amazonaws.com/your-Amazon-ECR-repo-name[:tag] or [@sha]`.\nIt must be compatible with the selected EMR release (6.9.0 and above) and located in the same region.\n" }, "GenericSideBar": { "AddSteps": "Jobs to be executed on the cluster. Please update the S3 path and the .py file.\n```js\n[\n{\n\"ActionOnFailure\" : \"CONTINUE\",\n\"Name\" : \"sparkstepTest\",\n\"HadoopJarStep\" : {\n\"Jar\" : \"command-runner.jar\",\n\"Args\" : [\n \"spark-submit\",\n \"s3://YOUR-S3-FOLDER/script3.py\"\n]\n}\n}\n]\n```", "ManagedScaling": "ManagedScalingPolicy example.\n```js\n{\n\"ComputeLimits\" : {\n\"UnitType\" : \"Instances\",\n\"MinimumCapacityUnits\" : 2,\n\"MaximumCapacityUnits\" : 5,\n\"MaximumOnDemandCapacityUnits\" : 5,\n\"MaximumCoreCapacityUnits\" : 3\n}\n}\n```" }, "azureAddVM": { "Name": "The name of the Virtual Machine.", "Subnets": "Select the subnet.", "InstanceType": "The size of the Virtual Machine.", "ImageId": "Choose the Image for the VM. The Image should be compatible with the agent platform. Select type as \"Other\" if you don't see the desired option in the dropdown.", "publicIp": "Choose `Enable` to use a public IP address if you want to communicate with the virtual machine from outside the virtual network.", "Encryption": "Choose to encrypt the given VM.", "diskSize": "Disk size in GB.", "allocationTags": "Allocation tags are the simplest way to constrain containers/pods to hosts/nodes. The DuploCloud/Kubernetes Orchestrator will make sure containers run on the hosts having the same allocation tags.", "joinDomain": "Choose Yes to join the VM to the managed domain.", "joinLogAnalytics": "Select Yes to connect Azure virtual machines to Log Analytics.", "Username": "The administrator username for the VM.", "Password": "The administrator password for the VM.", "ComputerName": "Enter to set the Computer Name for your Virtual Machine. If not specified, it will be set to the same value as the Name input.", "DiskControllerType": "The default value set is `SCSI`. If you want to set `NVME`, specify a supported Instance Size. Check [here](https://learn.microsoft.com/en-us/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series-nvme).", "installDuploNativeAgent": "Enable to install the agent on the host. Supports only Linux/Ubuntu.", "joinDomainType": "(Optional) Select the Domain Type for the host.", "timezone": "Select the timezone for the host.", "securityType": "Select `Standard` or `Trusted Launch` security type. Defaults to `Standard`.\nUse Trusted Launch for the security of `Generation 2` virtual machines (VMs). 
[Supported Sizes](https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch#virtual-machines-sizes)\n", "secureboot": "Select to enable Secure Boot for your VM.", "vtpm": "Select to enable virtual Trusted Platform Module (vTPM) for the Azure VM.", "encryptAtHost": "Select to enable Encryption at host.", "dynamicFields": { "base64": "Base64 encoded user data.", "name": "Select the Availability Set to associate with this VM.", "volumes": [ { "Name": "Specify the Disk name.", "VolumeId": "Logical unit number (LUN) of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM.", "Size": "Disk size in GB.", "VolumeType": "Choose the Storage Type." } ] } }, "AddStorageClass": { "name": "Storage Class Name.", "provisioner": "Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified.", "reclaimPolicy": "PersistentVolumes that are dynamically created by a StorageClass will have the reclaim policy specified in the reclaimPolicy field of the class, which can be either `Delete` or `Retain`. If no reclaimPolicy is specified when a StorageClass object is created, it will default to `Delete`.\nPersistentVolumes that are created manually and managed via a StorageClass will have whatever reclaim policy they were assigned at creation.\n", "volumeBindingMode": "The Volume Binding Mode field controls when volume binding and dynamic provisioning should occur. When unset, `Immediate` mode is used by default. \nThe `Immediate` mode indicates that volume binding and dynamic provisioning occur once the PersistentVolumeClaim is created. \nFor storage backends that are topology-constrained and not globally accessible from all Nodes in the cluster, PersistentVolumes will be bound or provisioned without knowledge of the Pod's scheduling requirements. \nThis may result in unschedulable Pods. \nA cluster administrator can address this issue by specifying the `WaitForFirstConsumer` mode, which will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. PersistentVolumes will be selected or provisioned conforming to the topology that is specified by the Pod's scheduling constraints. \n", "allowVolumeExpansion": "PersistentVolumes can be configured to be expandable. This feature, when set to true, allows users to resize the volume by editing the corresponding PVC object.", "parameters": "Storage Classes have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. \nFor example, the value io1, for the parameter type, and the parameter iopsPerGB are specific to EBS. When a parameter is omitted, some default is used.\nSample value for the EFS provisioner:\n```yml\nprovisioningMode: efs-ap\nfileSystemId: fs-0f5c4430534311cf1\ndirectoryPerms: \"700\"\ngidRangeStart: \"1000\" # optional\ngidRangeEnd: \"2000\" # optional\nbasePath: \"/dynamic_provisioning\" # optional\n\n## For Azure Storage Account Mount\nresourceGroup: rg-duplo-storage-account # Name of the RG in which the Storage Acc is residing\nsecretName: storage-account-volume-secret # Name of the secret to use (the secret should contain storage account name and key)\nshareName: storage-account-file-share # Name of the Azure file share\n```\n", "storageClassAnnotations": "Kubernetes annotations in key value format. 
A sample value is like below.\n```yml\nkey1: value1\nkey2: value2\n```\n", "storageClassLabels": "Kubernetes labels in key value format. A sample value is like below.\n```yml\nkey1: value1\nkey2: value2\n```\n", "allowedTopologies": "When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required, `allowedTopologies` can be specified.\nSample value for allowed topologies:\n```yml\n- matchLabelExpressions:\n  - key: failure-domain.beta.kubernetes.io/zone\n    values:\n    - us-central-1a\n    - us-central-1b\n```" }, "AzureCreateCosmosDBAccount": { "Name": "Specify a friendly name to identify your Cosmos DB account.", "CapacityMode": "Choose the capacity mode for your database. Choose 'Provisioned throughput' or 'Serverless'.", "PublicAccess": "Enable or disable public network access to the database.", "EnableFreeTier": "Enable the free tier for eligible Cosmos DB accounts.", "KeyBasedAuthentication": "Enable or disable key-based authentication for the account.", "BackupPolicyType": "Select the backup policy type for your account. Choose 'Periodic' or 'Continuous'.", "BackupInterval": "Set the interval for periodic backups in minutes (between 60 and 1440 minutes).", "BackupRetention": "Specify how long backups are retained in hours. A typical value is 168.", "BackupStorageRedundancy": "Choose the redundancy for backup storage ('Geo', 'Zone', or 'Local')." }, "EditGceVmHost": { "Tags": "Enter Network tags.", "Labels": "Specify Labels.\n```js\n{\n\"key\" : \"value\"\n}\n```\n", "Metadata": "Configure Compute Engine instance metadata.\n```js\n{\n\"key\" : \"value\"\n}\n```" }, "addSecretParams": { "Name": "Select AWS Secret/SSM Parameter.", "objectAlias": "The file name under which the secret will be mounted. When not specified, the file name defaults to objectName.", "specificKeys": "You can enable this field to mount key-value pairs from a properly formatted secret value as individual secrets.\nYou can also selectively choose a subset of keys from the JSON and mount them with different names in the pods." }, "emrserverlsess_add_runjob_sub_form_app_config": { "ApplicationConfiguration": "Specify job configurations to override the default configurations for your applications.\n 1. **Spark:** \n```js\n[{\n \"Classification\": \"spark-defaults\",\n \"Configurations\": [],\n \"Properties\": {\n \"spark.driver.cores\": \"2\",\n \"spark.driver.memory\": \"4g\",\n \"spark.dynamicAllocation.minExecutors\": \"1\"\n }\n }]\n```\n2. **Hive:** \n```js\n[\n {\n \"Classification\": \"hive-site\",\n \"Configurations\": [],\n \"Properties\": {\n \"hive.driver.cores\": \"2\",\n \"hive.driver.memory\": \"4g\",\n \"hive.tez.container.size\": \"8192\",\n \"hive.tez.cpu.vcores\": \"4\"\n }\n }\n ]\n```\n" }, "mq-config-add": { "Name": "Name of the configuration.", "EngineType": "Select the Type of broker engine.", "EngineVersion": "Version of the broker engine.", "AuthenticationStrategy": "Authentication strategy associated with the configuration. `LDAP` is not supported for the RabbitMQ engine type." }, "AddAppRunnerServiceAdvanced": { "AutoScalingConfigurationArn": "The Amazon Resource Name (ARN) of an App Runner auto scaling configuration resource to associate with your service. If not provided, App Runner associates the latest revision of a default auto scaling configuration.", "KmsKey": "The ARN of the AWS KMS key used for encryption. 
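For example, an ARN of the form `arn:aws:kms:<region>:<account-id>:key/<key-id>` (the bracketed parts are placeholders, not a real key). 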
App Runner uses this key to encrypt application logs and permanent storage.", "Tags": "Key-value pairs to associate with the App Runner service resource.\n```yaml\n# These will be converted to the AWS Tag format { Key: \"KeyName1\", Value: \"Value1\" }\nKeyName1: Value1\nKeyName2: Value2\n```", "IpAddressType": "The type of IP addresses assigned to your App Runner service. Choose between IPv4 (default) or Dual-stack (supports both IPv4 and IPv6).", "IsPubliclyAccessible": "Whether your App Runner service is publicly accessible through a public endpoint. When set to \"No\", the service is only accessible through a private VPC endpoint.", "PrivateEndpoint": "The endpoint available to access your service when it's not publicly accessible. Only applicable when IsPubliclyAccessible is set to \"No\".", "isOutgoingTrafficPublic": "Whether outgoing network traffic from your App Runner service routes through the public internet (default) or through your VPC.", "VpcConnectorArn": "The ARN of the VPC connector to use when routing outgoing traffic through your VPC. Required when outgoing traffic is not set to public.", "HealthCheckProtocol": "The IP protocol that App Runner uses to perform health checks for your service. Supports HTTP or TCP.", "HealthCheckPath": "The URL path that App Runner calls to perform health checks. Required when Protocol is HTTP.", "HealthCheckInterval": "The time interval, in seconds, between health checks. Value range of 1-20 with a default of 5.", "HealthCheckTimeout": "The time, in seconds, to wait for a health check response before deciding it failed. Value range of 1-20 with a default of 2.", "HealthyThreshold": "The number of consecutive successful health checks required before a service is considered healthy. Value range of 1-20 with a default of 1.", "UnhealthyThreshold": "The number of consecutive failed health checks required before a service is considered unhealthy. Value range of 1-20 with a default of 5." }, "load-balancer-metadata": { "config": "Select the configuration to apply for this load balancer metadata entry, e.g., Http To Https Redirect.", "value": "Enter the value corresponding to the selected configuration, e.g., \"443\"." }, "batchEnv": { "OtherConfigurations": "All Other Create Compute Environment Request parameters as documented [here](https://docs.aws.amazon.com/batch/latest/APIReference/API_CreateComputeEnvironment.html).\nSample value for a customized EC2 configuration.\n```js\n{\n \"ComputeResources\": {\n \"Ec2Configuration\": [\n {\n \"ImageType\": \"ECS_AL2_NVIDIA\"\n }\n ]\n }\n}\n```" }, "targetgroupForm": { "Name": "Specify name of the Target Group.", "targetTypes": "1. *instance*: Targets are EC2 instances.\n2. *ip*: Targets are IP addresses.\n3. *alb*: Targets are ALBs.\n", "port": "Specify the port.", "protocol": "Select the protocol.", "healthPath": "Specify the Health Check path. It determines whether a target is healthy and can receive traffic.", "healthCheckProtocol": "Select Health Check Protocol.", "httpCode": "Specify HTTP Success Code." }, "airflow_add_sub_form_dag": { "S3Bucket": "Choose the Amazon S3 storage bucket where the DAG is stored.", "DagS3Path": "Specify the relative path to the DAG folder from the S3 bucket. For example, `AirflowDags/`", "PluginsS3Path": "Specify the relative path to the plugins.zip file from the S3 bucket. For example, plugins.zip. If a relative path is provided in the request, then `Plugins S3Object Version` is required.", "RequirementsS3Path": "Specify the relative path to the requirements.txt file from the S3 bucket. 
For example, `requirements.txt` or `AirflowDags/folder1/requirements.txt`. If a relative path is provided in the request, then `Requirements S3Object Version` is required.", "RequirementsS3ObjectVersion": "Specify the Version Id of the requirements.txt file version you want to use. For example, lSHNqFtO5Z7_6K6YfGpKnpyjqP2JTvSf.\nS3 Enable Versioning is required to use this in your environment.\n", "PluginsS3ObjectVersion": "Specify the plugins.zip version. For example, lSHNqFtO5Z7_6K6YfGpKnpyjqP2JTvSf. S3 Enable Versioning is required to use this in your environment.", "StartupScriptS3Path": "Specify a shell (.sh) script to be executed during startup on Apache Airflow components. You need to specify the relative path to the script hosted in the S3 Bucket, for example, `AirflowDags/startup.sh`.\n", "StartupScriptS3ObjectVersion": "Specify the version ID of the startup shell script from the S3 Bucket. For example, YVu1x62otML9W8TQgCjm5iXWBtrGL3HP.\nS3 Enable Versioning is required to use this in your environment." }, "AddSNS": { "Name": "The name of the SNS topic.", "KmsKeyId": "Select the KMS key for encryption.", "FifoTopic": "Enable to create an SNS FIFO topic.", "ContentBasedDeduplication": "When selected, enables content-based deduplication for FIFO topics." }, "AddSecretProviderClass": { "name": "Secret Provider Class Name.", "provider": "Secret Provider.", "parameters": "The parameters section contains the details of the mount request and contains one of three fields:\n* objects: This is a string containing a YAML declaration (described below) of the secrets to be mounted.\n ```yaml\n objects:\n - objectName: \"MySecret\"\n   objectType: \"secretsmanager\"\n ```\nThe objects field of the SecretProviderClass can contain the following sub-fields:\n* objectName: This field is required. It specifies the name of the secret or parameter to be fetched. For Secrets Manager this is the [SecretId](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html#API_GetSecretValue_RequestParameters) parameter and can be either the friendly name or full ARN of the secret. For SSM Parameter Store, this must be the [Name](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html#API_GetParameter_RequestParameters) of the parameter and cannot be a full ARN.\n* objectType: This field is optional when using a Secrets Manager ARN for objectName, otherwise it is required. This field can be either \"secretsmanager\" or \"ssmparameter\".\n* objectAlias: This optional field specifies the file name under which the secret will be mounted. When not specified, the file name defaults to objectName.\n* jmesPath: This optional field specifies the specific key-value pairs to extract from a JSON-formatted secret. You can use this field to mount key-value pairs from a properly formatted secret value as individual secrets.\n If you use the jmesPath field, you must provide the following two sub-fields:\n * path: This required field is the [JMES path](https://jmespath.org/specification.html) to use for retrieval.\n * objectAlias: This required field specifies the file name under which the key-value pair secret will be mounted. \n", "secretObjects": "In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. Use the optional secretObjects field to define the desired state of the synced Kubernetes secret objects. 
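A minimal sketch of the expected shape (the secret and key names here are placeholders):\n```yaml\nsecretObjects:\n- secretName: my-k8s-secret\n  type: Opaque\n  data:\n  - objectName: MySecret\n    key: username\n```\n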
The volume mount is required for syncing with Kubernetes Secrets.\nYou can find more information about `secretObjects` [here](https://secrets-store-csi-driver.sigs.k8s.io/topics/sync-as-kubernetes-secret.html).\n", "secretProviderClassAnnotations": "Key Value pairs of annotations to be added to the SecretProviderClass.\nA sample value can be as below.\n```yaml\nkey1: value1\nkey2: value2\n```\n", "secretProviderClassLabels": "Key Value pairs of labels to be added.\nA sample value can be as below.\n```yaml\nkey1: value1\nkey2: value2\n```" }, "sqlDatabase": { "Name": "Enter the name of the instance. Use lowercase letters, numbers, or hyphens. Start with a letter.", "DatabaseVersion": "Select the MySQL, PostgreSQL or SQL Server version to use.", "Tier": "Select from the list of all available machine types (tiers) for the database.", "RootPassword": "Specify the root password. Required for MS SQL Server.", "DataDiskSizeGb": "Specify the size of the data disk, in GB.", "Labels": "Specify labels in the below format.\n```js\n{\n \"key\" : \"value\"\n}\n```\n", "Edition": "Select from the supported Editions." }, "appwebapp": { "Name": "Enter a unique name for the web app.", "AppServicePlan": "Select the App Service Plan to host the web app.", "Publish": "Choose the publish mode for the web app, e.g., 'Code' or 'Docker'.", "EnvironmentalVariables": "Enter environment variables as an array of name/value pairs.", "HealthCheckPath": "Specify the health check path for the web app (e.g., /ping or /healthy)." }, "cloudfront-add": { "Name": "Any friendly name for the CloudFront distribution.", "certificate": "Certificate ARN to be used by the distribution. Make sure the certificate is in the us-east-1 region.", "DefaultRootObject": "Default root object that will be returned while accessing the root of the domain. Example: index.html. Should not start with \"/\".", "defaultCachePolicyId": "Default Cache policy applied to the Distribution.\n [Click Here](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html) to know more about Amazon managed policies.\n You can also create your own custom cache policy and provide the ID of the policy.\n ", "defaultTargetOriginId": "Target Origin to apply the cache policy to. This Origin will be used as the default (\"*\") source by CloudFront.", "itemAliasName0": "The CNAME alias that you want to associate with this distribution. End users will be able to access the distribution using this URL.", "itemDomainName0": "Origin Domain Name from where the content will be fetched by the CDN. It can be an S3 bucket or a custom URL (API service URL, etc.).", "itemId0": "Unique Id for the origin. This Id will be referred to in the default cache behavior and custom cache behaviors.", "itemPath0": "Path that will be suffixed to the origin domain name (URL) while fetching content.\n For S3: if the content that needs to be served is under the prefix static, you should enter \"/static\" in the path.\n For a custom URL: if all the APIs have a prefix like v1, you should enter \"/v1\" in the path. ", "customCachePolicyId0": "Cache policy applied to this custom path.\n [Click Here](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html) to know more about Amazon managed policies.\n You can also create your own custom cache policy and provide the ID of the policy.\n ", "customCachePathPattern0": "The pattern (for example, `images/*.jpg`) that specifies which requests to apply the behavior to. 
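Other common patterns (illustrative): `images/*` matches everything under a prefix, and `*.gif` matches a file extension anywhere. 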
When CloudFront receives a viewer request, the requested path is compared with path patterns in the order in which cache behaviors are listed in the distribution.", "customTargetOriginId0": "Target Origin to apply the cache policy to. This Origin will be used as the source by CloudFront if the request URL matches the path pattern." }, "airflow_add_sub_form_basics": { "Name": "The name of the Apache Airflow Environment.", "AirflowVersion_other": "Airflow version of your environment; it will be set by default to the latest version that MWAA supports.", "WeeklyMaintenanceWindowStart_day": "Specify the start date for the weekly maintenance window.", "WeeklyMaintenanceWindowStart_time": "Specify the time for the weekly maintenance window.", "KmsKey": "Choose a KMS key for encryption.", "WebserverAccessMode_bool": "Enable if you want to access the webserver over the internet." }, "addContainerAppEnvironment": { "Name": "Enter a unique name for the Container App Environment.", "WorkloadProfiles0": "Select the workload profile to define the compute characteristics for this environment.", "workloadName": "Enter a name for the individual Workload Profile (max 15 chars).", "InstanceMinCount": "Specify the minimum number of instances for this profile.", "InstanceMaxCount": "Specify the maximum number of instances for this profile.", "appcontainerappenvInstanceMinCount": "Specify the minimum number of instances for this profile.", "appcontainerappenvInstanceMaxCount": "Specify the maximum number of instances for this profile.", "appLogDestinations": "Select where application logs from this environment should be sent.", "SubnetId": "Select the subnet where the Container App Environment will be deployed." }, "publicIpPrefixForm": { "Name": "Specify the name of the Public IP Prefix resource.", "PrefixLength": "Select from the Public IP prefix sizes.", "ResourceType": "Select `ApplicationGateway`. A public IP from a prefix will be used for your gateway.\nIf not selected, the Public IP Prefix is created with type Other. This will be used for Virtual Machine resources." }, "firestoreDatabase": { "Name": "Specify the Name for your database.", "type": "Select the type of the database.", "locationId": "Select the location of the database.", "pointInTimeRecoveryEnablement": "Choose to enable the PointInTimeRecovery feature on this database. If `POINT_IN_TIME_RECOVERY_ENABLED` is selected, reads are supported on selected versions of the data from within the past 7 days. The default value is `POINT_IN_TIME_RECOVERY_DISABLED`.", "deleteProtectionState": "Select the State of delete protection for the database. When delete protection is enabled, this database cannot be deleted. The default is set to `DELETE_PROTECTION_DISABLED`." }, "cert": { "SetAsActive": "Enable to create the Certificate in an Active state." }, "addStorageAccount": { "Name": "Enter a unique name for the Azure Storage Account. This name must be globally unique and cannot be changed after creation.", "name": "Enter a unique name for the Azure Storage Account. This name must be globally unique and cannot be changed after creation.", "allowBlobPublicAccess": "Enable this option to allow public (anonymous) read access to blobs. Use with caution, as this can expose data publicly." 
}, "emr_serverless_add_sub_form_configure": { "AutoStartConfiguration_Enabled": "Enable the configuration for an application to automatically start on job submission.", "AutoStopConfiguration_Enabled": "Enable the configuration for an application to automatically stop after a certain amount of time being idle.", "IdleTimeoutMinutes": "The amount of idle time in minutes after which your application will automatically stop." }, "addHelmRelease": { "Name": "Provide Name for the Helm Chart.", "ReleaseName": "Provide release name to identify specific deployment of helm chart.", "ChartName": "Provide unique name to specify the name of the chart.", "ChartVersion": "Specify Helm Chart Version.", "ChartReconcileStrategy": "Defaults to `Chart Version`. No new chart artifact is produced on updates to the source unless the version is changed in HelmRepository.\nUse `Revision` to produce new chart artifact on change in source revision.", "SourceType": "Set as default HelmRepository.", "SourceName": "Select the Helm Repo configured from the list.", "Values": "Specify in yaml format. This field allows users to customize Helm charts. \nExample\n```yaml\n\n replicaCount: 2\n serviceAccount:\n create: true\n\n```", "ValuesFrom": "Specify the reference of ConfigMap and Secret resources from which to take values.\nExample\n```yaml\n\n- kind: ConfigMap\n name: nginx-values\n valuesKey: values.yaml\n- kind: Secret\n name: nginx-secrets\n valuesKey: secret-values.yaml\n\n```", "Labels": "Specify labels\n```yaml\n\nteam: platform\napp.duplocloud.net/app-name: myapp-prod # label to group K8s resource by appname\n\n```\n\n " }, "startInstanceRefresh": { "InstanceReplacementMethod": "Select how instances should be replaced during the refresh.", "MinHealthyPercentage": "Specify the minimum percentage of healthy instances that must be maintained during the refresh.", "MaxHealthyPercentage": "Specify the maximum percentage of healthy instances allowed during the refresh. This field may be auto-calculated or limited depending on your environment.", "Instance Warmup": "Enter the time in seconds that a new instance should be considered 'warming up' before it is counted as healthy.", "ShowLaunchTemplate": "Select this option to update the launch template associated with the instances being refreshed.", "LaunchTemplateVersion": "Specify the version of the launch template to use for the instance refresh." }, "databrick": { "Name": "Enter Databricks workspace name.", "Tier": "Choose between Standard or Premium. View Pricing details [here](https://azure.microsoft.com/en-us/pricing/details/databricks/).", "DisablePublicIP": "If you set this to `Disabled`, no user access is permitted from the public internet.\nIf you set this to `Enabled`, users and REST API clients on the public internet can access Azure Databricks\n", "VnetIntegration": "Choosing `Enabled` allows you to deploy an Azure Databricks workspace in your virtual network.", "PrivateSubnet": "Use the default private subnet name.", "PublicSubnet": "Use the default public subnet name." }, "app-add-app-service-plan": { "Name": "Enter a unique name for the App Service Plan.", "tier": "Select the pricing tier for the App Service Plan.", "serversize": "Choose the size of the server hosting the plan.", "platform": "Select the operating system platform for this plan.", "noofinstances": "Specify the number of instances to run in this plan." }, "awsBillingAlert": { "Disable": "Enable or Disable the Billing Alert. 
If disabled, no alert will be triggered.", "Previous Month Spend": "If selected, the threshold will be set to the monthly spend of the previous month. If not, the threshold must be set manually.", "BudgetAmount": "Specify the custom threshold which will be used to compute the alert. Enter a positive dollar amount.", "AlertTrigger": "Select the percentage of the threshold above which the alert will be triggered.", "AlertTriggerOther": "Specify the custom trigger value. Enter a positive integer percent value.", "EmailNotifications": "Select the admin users to receive the billing alert email." }, "AddElasticSearch": { "Name": "Name of the domain.", "Version": "Select the version of Elasticsearch to deploy.", "DataInstanceType": "Select the Instance type of data nodes in the cluster.", "DataInstanceCount": "Provide Data Instance Count.", "VolumeSize": "Provide Storage in GB.", "DedicatedMasterType": "Select the Instance type of the dedicated master nodes in the cluster.", "DedicatedMasterCount": "Provide the Number of dedicated master nodes in the cluster.", "Kms": "Select the key to encrypt the Elasticsearch domain with.", "RequireSSL": "Enable if HTTPS is required.", "UseLatestTlsCipher": "Select to use the latest TLS Cipher.", "EnableNodeToNodeEncryption": "Select to enable node-to-node encryption.", "WarmEnabled": "Enable UltraWarm storage.", "WarmType": "Select the Instance type for the OpenSearch cluster's warm nodes.", "WarmCount": "Specify the number of warm nodes in the cluster. Valid values are between 2 and 150.", "ColdStorageOptions": "Select to enable cold storage for an OpenSearch domain." }, "addTimestreamTable": { "TableName": "The name of the Timestream table.", "MemoryStoreRetentionHours": "The duration for which data must be stored in the memory store. Minimum value of 1. Maximum value of 8766.", "MagneticStoreRetentionPeriodInDays": "The duration for which data must be stored in the magnetic store. Minimum value of 1. Maximum value of 73000.", "EnableMagneticStorageWrites": "Select to enable magnetic store writes.", "S3BucketError": "Select the S3 location for storing the error logs.", "S3FolderError": "Specify the S3 folder location name.", "KmsKeyId": "Select the KMS key for the S3 location." }, "awsAccountSecurity": { "EnableSecurityHub": "Enable AWS Security Hub in any region where there is infrastructure managed by DuploCloud.", "EnableAllSecurityHubRegions": "Enable AWS Security Hub in all AWS regions managed by DuploCloud.", "SecurityHubMembershipType": "Enable or disable multi-account AWS Security Hub:\n\n- **Local**: Disable any multi-account administration by this account in AWS Security Hub.\n- **Centralized (in this account)**: Allow this account to manage other accounts in AWS Security Hub.\n- **Centralized (in another account)**: Allow this account to be managed by another DuploCloud in AWS Security Hub. Disable any multi-account administration by this account in AWS Security Hub.\n", "SecurityHubAdminAccount": "The AWS Account ID of the Security Hub administrator account.", "ConfigLogBucketType": "Enable or disable cross-account AWS Config logging:\n\n- **Local**: AWS Config will log to an S3 bucket owned by this account.\n- **Centralized (in this account)**: AWS Config will log to an S3 bucket owned by this account. 
Allow AWS Config in other accounts to log to this same bucket.\n- **Centralized (in another account)**: AWS Config will log to an S3 bucket owned by another account.\n", "ConfigLogBucketName": "The S3 bucket name that AWS Config will log to.", "EnableGuardDuty": "Enable AWS GuardDuty in all AWS regions managed by DuploCloud.", "EnableInspector": "Enable AWS Inspector in any region where there is infrastructure managed by DuploCloud.", "EnableAllInspectorRegions": "Enable AWS Inspector in all AWS regions managed by DuploCloud.", "IgnoreDefaultEbsEncryption": "Normally, DuploCloud enables EBS Default Encryption for all regions in which you deploy infrastructure.\n\nWhen this box is checked, DuploCloud will ignore the EBS Default Encryption settings when creating any new infrastructure.\n\nHowever, you can still edit the `EBS Encryption by Default` setting for your infrastructure - to enable EBS encryption by default, for the entire AWS region.\n", "EnablePasswordPolicy": "Enable an account-level IAM User password policy:\n\n- Minimum password length is 14 characters\n- Require at least one uppercase letter from Latin alphabet (A-Z)\n- Require at least one lowercase letter from Latin alphabet (a-z)\n- Require at least one number\n- Require at least one non-alphanumeric character (! @ # $ % ^ & * ( ) _ + - = [ ] { } | ')\n- Password expires in 90 day(s)\n- Allow users to change their own password\n- Remember last 24 password(s) and prevent reuse\n", "EnableCloudTrail": "Enable a multi-region CloudTrail for this AWS account.\n\nEnabling this feature will tell DuploCloud to:\n\n- Create and manage a multi-region CloudTrail in this AWS account\n- Create a CloudWatch log group named `/cloudtrail/duplo` that receives CloudTrail events\n- Create and manage an S3 bucket that receives CloudTrail log files\n", "CloudTrailLogBucketType": "Enable or disable cross-account AWS CloudTrail logging:\n\n- **Local**: AWS CloudTrail will log to an S3 bucket owned by this account.\n- **Centralized (in this account)**: AWS CloudTrail will log to an S3 bucket owned by this account. Allow AWS CloudTrail in other accounts to log to this same bucket.\n- **Centralized (in another account)**: AWS CloudTrail will log to an S3 bucket owned by another account.\n", "CloudTrailLogBucketAccount": "The AWS Account where AWS CloudTrail S3 logs will reside.", "CloudTrailLogBucketName": "The S3 bucket name that AWS CloudTrail will log to.", "CloudTrailLogBucketKmsKeyId": "The KMS Key ID that AWS CloudTrail S3 logs will use.", "EnableVpcFlowLogs": "Enable VPC flow logs for all VPCs created by DuploCloud.", "DeleteDefaultVpcs": "Delete default VPCs in all AWS regions managed by DuploCloud.", "DeleteDefaultNaclRules": "Delete default NACL rules for all VPCs created by DuploCloud.", "RevokeDefaultSgRules": "Revoke default Security Group rules for all VPCs created by DuploCloud.", "EnableCisCloudTrailCloudWatchAlarms": "Enables CIS CloudTrail CloudWatch Alarms in all AWS regions managed by DuploCloud.\n\nEnabling this feature and specifying an email address to receive alerts will satisfy AWS CIS controls 3.1 through 3.14.\n", "CisCloudWatchAlarmNotificationEmail": "Specifies an email address that should receive the CIS CloudTrail CloudWatch Alarms.\n\nIn order to satisfy AWS CIS controls 3.1 through 3.14, the email recipient must first confirm the subscription to receive the alerts from AWS." }, "backupRetentionPeriod": { "BackupRetentionPeriod": "Specify, in days, how long to save automated backups of your DB. Valid values are 1-35." 
}, "AddNamespace": { "Name": "Specify the name of the ServiceBus Namespace resource.", "SkuTier": "Select the Pricing Tier.", "Version": "Select a specific TLS version for your namespace.", "AutoGrow": "Enable/ Disable SAS authentication for the Service Bus namespace." }, "addresourcequotaform": { "resourcequotaname": "Enter quota name.", "memoryrequest": "Max total memory requested across all pods. Example `4Gi`.", "memorylimit": "Max total memory limit across all pods. Example `8Gi`.", "cpurequest": "Max total CPU requested across all pods. Example `2`.", "cpulimit": "Max total CPU limit across all pods. Example `4`.", "dataYaml": "```yaml\nconfigmaps: '200'\npods: '2'\n```\n", "scopeSelectorYaml": "scopeSelector allows you to apply quotas only to specific subsets of resources\u2014like terminating pods, non-terminated pods.\n```yaml\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values:\n - middle\n```" }, "batchSchedulingPolicy": { "Name": "Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.", "Tags": "A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value. You can use tags to search and filter your resources or track your AWS costs\nSample Value for **Tags** is as below\n```js\n{\n \"Key\" : \"Value\"\n}\n\n```\n", "ShareDistribution": "An array of SharedIdentifier objects that contain the weights for the fair share identifiers for the fair share policy. Fair share identifiers that aren't included have a default weight of 1.0.\n[Click Here](https://docs.aws.amazon.com/batch/latest/APIReference/API_ShareAttributes.html) to know more about **ShareDistribution**.\nSample Value for the Other Job properties to set **ContainerOverrides** is as below\n```js\n[\n {\n \"ShareIdentifier\": \"SomeIdentifier\",\n \"WeightFactor\": 0.1\n }\n]\n```" }, "AddBYOHHost": { "Name": "A friendly name to BYOH host.", "DirectAddress": "IPv4 address to which DuploCloud will use to communicate with agent installed on your host.", "FleetType": "Fleet type represents the type of container orchestrator running your Host.\n1. **Linux Docker/Native:** Select this option if operating system running on your host is Linux.\n2. **Docker Windows:** Select this option if operating system running on your host is Windows\n3. **None:** Select this option if no agent/container orchestrator is running on the host.\n", "Username": "Username to login to your host. This is an optional field.", "Password": "Password to login to your host. This is an optional field.", "PrivateKey": "Private Key to login to your host using SSH. This is again an optional field. User can either specify `Password` or `Private Key`." }, "S3bucketReplication": { "Name": "Specify rule name.", "selectTenant": "Select the Tenant.", "destinationS3": "Select the destination S3 bucket for replication", "Priority": "Specify the Priority associated with the rule. Priority must be unique between multiple rules.", "DeleteMarkerReplication": "Enable/Disable if delete markers needs be replicated.", "changeStorage": "Select to set the Storage Class used to store the object.", "StorageClass": "Select the Storage Class from the supported list to store the objects." 
}, "batchJob": { "OtherJobProperties": "All Other Submit Job Definition Request parameters as documented\n [here](https://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html#Batch-SubmitJob-request-containerOverrides).\nSample\n Value for the Other Job properties to set **ContainerOverrides** is as below\n```js\n{\n\n \"ContainerOverrides\" : {\n \"Command\" : [ \"echo\", \"hiii\"],\n \"Environment\":\n [\n {\n \"name\": \"KEY1\",\n \"value\": \"123\"\n\n }\n ]\n }\n\n}\n```" }, "addTimestreamDatabase": { "DatabaseName": "The name of the Timestream database.", "KmsKeyId": "Select the KMS key to be used to encrypt the data stored in the database." }, "AddIngressRule": { "serviceName": "Name of the kubernetes service which Ingress will use as backend to serve the request.\nThe service must already be configured as a LoadBalancer, NodePort, or ClusterIP.\nSee the documentation [here](https://docs.duplocloud.com/docs/aws/quick-start/step-6-create-a-load-balancer)\n", "port": "Backend port on the Kubernetes service that the Ingress will route traffic to.\nThis value must be provided as a string (for example, \"443\"), not a number..\n", "host": "If a host is provided (for e.g. example.com, foo.bar.com), the rules apply to that host.", "path": "Specify the path (for e.g. /api /v1/api/)for path-based routing. If a host is specified, both the host and path must match the incoming request.", "pathType": "Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:\n\n1. *ImplementationSpecific*: Matching behavior is determined by the IngressClass and may behave like Prefix or Exact path types.\n2. *Exact*: Matches the URL path exactly and with case sensitivity.\n3. *Prefix*: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. \n" }, "AddServiceBasic": { "ServiceName": "Enter a unique name for your service. Must start with a letter and can include letters, numbers, and hyphens, and be 2-45 characters.", "cloud": "Select the cloud provider where this service will run (AWS, Azure, GCP).", "agentPlatforms": "Select the application platform for the service. For example, Linux Docker/Native.", "ImageName": "Enter the Docker image name for this service, including repository and tag if applicable.", "AllocationTag": "Enter a tag that matches a corresponding allocation tag on a host. \nOnly hosts with a matching allocation tag will run this service. \nFor example, if you want the service to run only on hosts tagged 'web', \nset the allocation tag here to 'web'. If no host has the tag, the service \nwill not be deployed.\n", "IsDaemonset": "Enable this option to run one instance of the service on every host \nin the selected tenant. This ensures the service is deployed across all hosts.\n", "IsAnyHostAllowed": "If \"Yes\", the service can run on any host in the same plan as\nthe selected tenant.\n", "asgName": "Selecting an Auto Scaling Group (ASG) ensures that the replicas of\nthis service match the nodes in the ASG.\nAny value set for replicas in this form will be overridden.\n", "ReplicaCollocation": "When enabled, multiple replicas of the same service can be allocated to same node. 
When disabled, each replica needs to be on a separate host (a single host can still hold containers from different services).", "LBSyncedDeployment": "Enables Zero Downtime Updates. When enabled, each replica is drained from the load balancer before being updated and then re-added afterward. This ensures zero downtime but slows the rollout. When disabled, updates occur sequentially without load balancer coordination, which may cause brief service glitches (5\u201320 seconds).", "LivenessProbe": "Define the liveness probe for the container. For example:\n```js\n{\n \"failureThreshold\": 3,\n \"httpGet\": {\n \"path\": \"/healthz/live\",\n \"port\": 80,\n \"scheme\": \"HTTP\"\n },\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"timeoutSeconds\": 5\n}\n```\n", "ReadinessProbe": "Define the readiness probe for the container. For example:\n```js\n{\n \"failureThreshold\": 3,\n \"httpGet\": {\n \"path\": \"/healthz/ready\",\n \"port\": 80,\n \"scheme\": \"HTTP\"\n },\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"timeoutSeconds\": 5\n}\n```\n", "securityContext": "Define the security context for the service. For example:\n```js\n{\n \"Capabilities\": {\n \"Add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"Drop\": [\n \"ALL\"\n ]\n },\n \"ReadOnlyRootFilesystem\": false,\n \"RunAsNonRoot\": true,\n \"RunAsUser\": 1000\n}\n```\n", "k8sPodConfig": "Pass generic Kubernetes Pod configurations, such as resource limits or a restart policy. As an example: \n```js\n{\n \"RestartPolicy\": \"Always\",\n \"envFrom\": [\n {\n \"configMapRef\": {\n \"name\": \"api-configs\"\n }\n },\n {\n \"secretRef\": {\n \"name\": \"api-secrets\"\n }\n },\n {\n \"configMapRef\": {\n \"name\": \"api-db\",\n \"optional\": true\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"1Gi\"\n },\n \"requests\": {\n \"memory\": \"384Mi\"\n }\n }\n}\n```\n", "podSecurityContext": "Define security settings to apply to all containers in the Pod, \nsuch as user/group IDs and file system permissions. For example: \n```js\n{\n \"FsGroup\": 1001,\n \"RunAsUser\": 1001\n}\n```\n", "EnvVariablesK8s": "Enter environment variables to be passed to the containers in YAML format. For example:\n```yaml\n---\n- Name: DB_HOST\n  Value: abc.com\n- Name: DB_USER\n  Value: myuser\n- Name: DB_PASSWORD\n  Value: mypassword\n# using secrets\n- Name: SECRET_USERNAME\n  ValueFrom:\n    secretKeyRef:\n      name: mysecret\n      key: username\n# using ConfigMap data\n- Name: LOG_LEVEL\n  ValueFrom:\n    configMapKeyRef:\n      name: env-config\n      key: log_level\n```", "NetworkMode": "Select 'Host Network' to share the host's networking stack. Container ports are accessible directly on the host without explicit port mapping.", "ReplicationStrategy": "Choose how replicas are managed:\n 1. **Static**: This option can be used to set a fixed count for the replicas.\n 2. **Daemonset**: If this option is selected, pods run on every host in the tenant.\n 3. **Horizontal Pod Autoscaler**: Automatically scales replicas up or down based on metrics like CPU Usage, Memory Usage, Ingress requests per second, etc. For more information, [Click Here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).\n", "HPAConfig": "Enter horizontal pod autoscaler specifications as documented [here](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). 
Sample values are shown below:\n```yaml\nmaxReplicas: 5\nmetrics:\n- type: Resource\n  resource:\n    name: cpu\n    target:\n      type: Utilization\n      averageUtilization: 80\nminReplicas: 2\n```", "envVariables": "Pass environment variables in JSON format. Refer to the sample below.\n```js\n{\n \"discovery.type\": \"single-node\",\n \"plugins.security.disabled\": \"true\",\n \"compatibility.override_main_response_version\": \"true\"\n}\n```\n", "dockerNetwork": "Optionally, specify additional network settings beyond the default container network.", "ReplicaPlacement": "Select how replicas are placed.\n1. **First Available**: Replicas prefer, but are not required, to be scheduled on different hosts.\n2. **Place on Different Hosts**: Replicas are required to be placed on different hosts.\n3. **Spread Across Zones**: Replicas are required to be spread across availability zones and prefer, but are not required, to be scheduled on different hosts.\n4. **Different Hosts and Spread Across Zones**: Replicas must be on separate hosts and spread across zones.\n", "TolerateSpotInstance": "Select to run pods on Spot instances. Relevant tolerations are added automatically in 'Other Container Config'.", "Replicas": "Specify the number of replicas for the service.", "ForceStatefulSet": "Enable this option to create the service as a StatefulSet.", "AppName": "Optionally, enter an App Name to group services. This helps filter services belonging to the same app." }, "emrserverlsess_add_runjob_sub_form_script": { "ScriptQueryInitFileBucket": "Choose the S3 Bucket where the initialization script is stored.", "ScriptQueryInitFile": "Specify the S3 folder location of the initialization script that you can use to initialize tables prior to running the Hive script. Example: `samples/hive-scripts/create_table.sql`\n", "ScriptQueryBucket": "Choose the S3 bucket where the query files are stored.", "ScriptQueryFile": "Specify the S3 folder location of the script which needs to be executed in the job.\nExample: `samples/hive-scripts/extreme_weather.sql`\n", "ScriptS3Bucket": "Choose the S3 Bucket where scripts are stored.", "ScriptS3BucketFolder": "Specify the S3 folder location where scripts are stored. Example: `samples/spark-scripts/wordcount.py`\n", "ScriptArguments": "Specify an array of arguments passed to your main JAR or Python script. Each argument in the array must be separated by a comma. \n Example:\n ```js\n [\"s3:///wordcount_output\", \"40000\"]\n ```\n", "ScriptSubmitParameters": "Specify additional configuration properties for each job.\n 1. **Spark:** \n Example:\n `--conf spark.executor.cores=1 --conf spark.executor.memory=4g --conf spark.driver.cores=1 --conf spark.driver.memory=4g --conf spark.executor.instances=1`\n 2. 
**Hive:** \n Example: \n `--hiveconf hive.exec.scratchdir=s3:///hive/scratch --hiveconf hive.metastore.warehouse.dir=s3:///hive/warehouse`\n" }, "K8sJobAdvancedForm": { "specOther": "Add the Job spec in YAML format.\n```yaml\ncompletions: 3\nmanualSelector: true\nparallelism: 3\ntemplate:\n  spec:\n    dnsPolicy: ClusterFirst\n    schedulerName: default-scheduler\n    securityContext: {}\n    terminationGracePeriodSeconds: 30\n    volumes:\n    - name: my-volume\n      persistentVolumeClaim:\n        claimName: my-pvc-claim\n```\n", "medataAnnotations": "Add Job Metadata Annotations in the format below:\n```yaml\nannotation_name: value1\nannotation_type: value2\n```\n", "medataLabels": "Add Job Metadata Labels.\n```yaml\nlabel1: value1\nlabel2: value2\n```" }, "VantaControl": { "EnableVanta": "Enable Vanta monitor. GuardDuty will be enabled by default when this setting is enabled.", "tenant": "Select the tenant for which you want to configure Vanta monitoring.", "guardDutyEmail": "Enter the email ID. The SNS Topic subscription will be sent to this email ID.", "description": "Specify the description.", "production": "When Production is set as True, the SNS Topic will be configured in the Infrastructure region of the Tenant.", "contains": "Enables User Data.", "owner": "Enter the email of the owner." }, "AddPostgreSQLFlexi": { "Name": "Specify Server Name.", "SkuTier": "Select SKU Tier.", "AdminUsername": "Specify the primary administrator username.", "AdminPassword": "Specify Server Password.", "SkuName": "Select Hardware.", "StorageSizeGB": "Storage in GB.", "Version": "Select the version of PostgreSQL Flexible Server to use.", "BackupRetentionDays": "Specify the backup retention days for the PostgreSQL Flexible Server.", "GeoRedundantBackup": "Select to enable Geo-Redundant backup.", "SubnetId": "Select the Subnet.", "HighAvailability": "Select High Availability." }, "AddIngress": { "name": "Name for your ingress object.", "ingressClassName": "Select ingress controller name.", "dNSPrefix": "Provide DNS prefix.", "visibility": "Visibility can be used to manage the accessibility of services exposed by Ingress load balancer.\n*Internal Only*: Services will be accessible within tenant and to other tenants only if allowed by security rules.\n*Public*: Services will be accessible over internet.\n", "certificateArn": "Select certificate ARN to expose services over HTTPS.", "ingressRules": "Ingress rules specifications as documented [here](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules)\nClick on the `Add Rule` button to add the rules for the kubernetes services created using DuploCloud.\n", "ingressAnnotations": "List of Key Value pairs to annotate the Ingress Object. \nAnnotations are used for controlling the behavior of the services exposed by the Ingress. \nEach ingress controller defines the set of annotations which can be used to control the behavior.\nRefer these links: \n 1. **AWS ALB ingress controller** [here](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/)\n 2. **Azure Application Gateway Ingress** [here](https://azure.github.io/application-gateway-kubernetes-ingress/annotations/#list-of-supported-annotations)\n 3. **GCP Ingress** [here](https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#ingress_annotations)\n", "httpPort": "HTTP Listener Port. If you don't want to expose your services over HTTP, leave it blank.", "httpsPort": "HTTPS Listener Port. 
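For example, `443` is the conventional HTTPS listener port. 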
The HTTPS Listener port can be specified only when an SSL certificate ARN is specified.", "ingressLabels": "Key/value pairs of labels to be added to the Ingress.\nA sample value is shown below:\n```yaml\nteam: platform\napp.duplocloud.net/app-name: myapp-prod # label to group K8s resources by app name\n```\n", "portOverride": "Select port to override. This field allows configuring the frontend listener to use ports other than 80/443 for HTTP/HTTPS.", "targetType": "Specifies how to route traffic to pods. You can choose between `instance` and `ip`.\n\n1. **instance**\nThis mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. The service must be of type `NodePort` or `LoadBalancer` to use instance mode.\n\n2. **ip**\nThis mode will route traffic directly to the pod IP. The network plugin must use secondary IP addresses on the ENI for pod IPs to use ip mode, e.g. amazon-vpc-cni-k8s. \nThe service can be of any type, such as `ClusterIP`, `NodePort`, or `LoadBalancer`, to use ip mode.\nIP mode is required for sticky sessions to work with Application Load Balancers. \n", "EnableHttpToHttps": "Enable to redirect HTTP to HTTPS.", "tlsHosts": "Specify TLS Hosts. Multiple hosts can be added comma-separated: `example.com, api.example.com, dashboard.example.com`. [Details](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls).", "tlsSecretName": "Specify the secret name that contains a TLS private key and certificate." }, "addAcceleratorForm": { "acceleratorCount": "Specify the number of GPUs.", "acceleratorType": "Specify the Accelerator Type (GPU type). Google does not offer every instance/GPU combination in every region/zone. \nEnter the compatible type based on the instance type and zone. Example: the `nvidia-tesla-a100` Accelerator Type is supported in Zone `us-west4-b` for Instance Type `a2-highgpu-1g`.\nFor GPU regions and zone availability, click [here](https://cloud.google.com/compute/docs/gpus/gpu-regions-zones).\nFor more details on running GPUs in a GKE Standard pool, click [here](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus).\n", "gpuPartitionSize": "Specify GPU Partition Size. Example- `1g.5gb`. Refer [UserGuide](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).", "maxSharedClientsPerGpu": "Specify the maximum number of shared clients per GPU.", "gpuSharingStrategy": "Select GPU Sharing Strategy.", "gpuDriverInstallationConfig": "Select the GPU Driver Installation Strategy\n1. **Default:** Installs the default driver version for your GKE version.\n2. **Latest:** Installs the latest available driver version for your GKE version. Available only for nodes that use Container-Optimized OS.\n3. **Disable GPU driver auto installation:** Disables GPU driver auto installation; drivers must be installed manually.\n4. **Do not install any GPU driver:** Does not install any GPU driver." }, "thing": { "Name": "The name of the thing.", "IotCertificateArn": "Select IoT certificate.", "ThingTypeName": "Select the thing type name. Defaults to `DuploDefault`.", "Attributes": "Map of attributes of the thing. 
Example:\n```json\n{\"First\": \"examplevalue\"}\n```" }, "jobDefinition": { "OtherContainerProperties": "All Other Container Properties for the Register Job Definition Request parameters as documented [here](https://docs.aws.amazon.com/batch/latest/APIReference/API_RegisterJobDefinition.html) only for the **ContainerProperties** section.\nSample value for the Other Container properties to override **Command** is shown below:\n```js\n{\n \"Command\" : [\"sleep\", \"5\"]\n}\n```\n", "OtherJobProperties": "All Other Job Properties for the Register Job Definition Request parameters as documented [here](https://docs.aws.amazon.com/batch/latest/APIReference/API_RegisterJobDefinition.html) except the **ContainerProperties** section.\nSample value for the Other Job properties to set **RetryStrategy** is shown below:\n```js\n{\n \"RetryStrategy\": {\n \"EvaluateOnExit\": [\n {\n \"Action\": \"EXIT\",\n \"OnExitCode\": \"1*\",\n \"OnReason\": \"reason*\",\n \"OnStatusReason\": \"status\"\n }\n ]\n }\n}\n```" }, "addLogDeliveryConfiguration": { "LogFormat": "Select Log Format. Supported formats are `TEXT` and `JSON`.", "LogType": "Select the Log Type: `engine log` or `slow log`.", "LogGroup": "Specify the CloudWatch Log Group. If the Log Group does not exist, a new log group will be created." }, "azureAddAgentPool": { "name": "Select the Id to create a node pool.", "InstanceType": "The size of the virtual machines that will form the nodes in this node pool.", "MinSize": "Set the minimum node count.", "MaxSize": "Set the maximum node count.", "DesiredCapacity": "Set the desired capacity for autoscaling.", "AllocationTag": "Allocation tags are the simplest way to constrain containers/pods to hosts/nodes. The DuploCloud/Kubernetes Orchestrator will make sure containers run on the hosts having the same allocation tags.", "enableAutoScaling": "Enable this when you want the Kubernetes cluster to be sized correctly for the currently running workloads.", "azureAddAgentPoolMaxPods": "Specify to adjust the maximum number of pods per node.", "AvailabilityZones": "(Optional) Select availability zones for your AKS agent pool nodes.", "ScaleSetPriority": "The Virtual Machine Scale Set priority.\n 1. **Regular**: Creates a regular agent pool node with standard priority.\n 2. **Spot**: Creates Spot AKS agent pool nodes.\n", "ScaleSetEvictionPolicy": "The eviction policy defines what happens when Azure evicts Spot nodes.", "SpotMaxPrice": "(Optional) Defines the max price when creating or adding a Spot VM agent pool.", "MaxPods": "Enter the maximum number of pods per node. Default is 30." }, "k8sConfigmap": { "Name": "Enter Config Map name.", "Data": "Enter configuration data in key-value format.\n```yaml\nAPP_ENV: \"production\"\nLOG_LEVEL: \"debug\"\nAPI_URL: \"https://api.example.com\"\n```", "lables": "Specify labels\n```yaml\nteam: platform\napp.duplocloud.net/app-name: myapp-prod # label to group K8s resources by app name\n```" }, "AddS3Bucket": { "Name": "Specify name of the bucket.", "InTenantRegion": "Select the AWS region.", "s3ObjectLock": "Enable to configure S3 Object Lock. When you create a bucket with S3 Object Lock enabled, Amazon S3 automatically enables versioning for the bucket.", "enableBucketVersioning": "Enable to configure S3 versioning." }, "lbrule": { "itemConditionType0": "Following types of Rule Conditions are supported:\n1. **Path:** Route based on path patterns in the request URLs.\n2. **Host Header:** Route based on the host name of each request.\n3. **Source IP:** Route based on the source IP address of each request.\n4. 
**HTTP Request Method:** Route based on the HTTP request method of each request.\nFor details, refer [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules).\n", "Priority": "(Optional) Priority will be added by DuploCloud when not entered. Note: A listener can't have multiple rules with the same priority.", "tgForwardActionArn": "Select the Target Group to which to route the traffic.", "itemVal00": "Enter the value applicable to the selected Rule Condition. For details, refer [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules)." }, "AddContainerAppBasic": { "Name": "Enter a unique name for your container app. Must be 1-32 characters, lowercase letters, numbers, or hyphens.", "AppEnvironment": "Select the environment in which the container app will run.", "WorkloadProfileName": "Choose a workload profile, e.g., 'Consumption' for serverless apps.", "EnableIngress": "Choose whether to enable ingress for this app. Disabled means no public HTTP access." }, "dataPipelineAddImport": { "Name": "Unique Data Pipeline Name.", "PipeLineDef": "Please provide the Data Pipeline definition JSON. Provide EmrCluster details if using an existing EmrCluster.\n ```js\n {\n \"PipelineObjects\": [\n {\n \"Id\": \"Default\",\n \"Name\": \"Default\",\n \"Fields\": [\n {\n \"Key\": \"failureAndRerunMode\",\n \"StringValue\": \"CASCADE\"\n },\n {\n \"Key\": \"pipelineLogUri\",\n \"StringValue\": \"s3://YOUR-S3-FOLDER/logs/data-pipelines/\"\n },\n {\n \"Key\": \"scheduleType\",\n \"StringValue\": \"cron\"\n }\n ]\n },\n {\n \"Id\": \"EmrConfigurationId_Q9rpL\",\n \"Name\": \"DefaultEmrConfiguration1\",\n \"Fields\": [\n {\n \"Key\": \"configuration\",\n \"RefValue\": \"EmrConfigurationId_LFzOl\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"EmrConfiguration\"\n },\n {\n \"Key\": \"classification\",\n \"StringValue\": \"spark-env\"\n }\n ]\n },\n {\n \"Id\": \"ActionId_SUEgm\",\n \"Name\": \"TriggerNotificationOnFail\",\n \"Fields\": [\n {\n \"Key\": \"subject\",\n \"StringValue\": \"Backcountry-clickstream-delta-hourly: #{node.@pipelineId} Error: #{node.errorMessage}\"\n },\n {\n \"Key\": \"message\",\n \"StringValue\": \"Backcountry-clickstream-delta-hourly failed to run\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"SnsAlarm\"\n },\n {\n \"Key\": \"topicArn\",\n \"StringValue\": \"arn:aws:sns:us-west-2:269378226633:duploservices-pravin-test-del77-128329325849\"\n }\n ]\n },\n {\n \"Id\": \"EmrActivityObj\",\n \"Name\": \"EmrActivityObj\",\n \"Fields\": [\n {\n \"Key\": \"schedule\",\n \"RefValue\": \"ScheduleId_NfOUF\"\n },\n {\n \"Key\": \"step\",\n \"StringValue\": \"#{myEmrStep}\"\n },\n {\n \"Key\": \"runsOn\",\n \"RefValue\": \"EmrClusterObj\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"EmrActivity\"\n }\n ]\n },\n {\n \"Id\": \"EmrConfigurationId_LFzOl\",\n \"Name\": \"DefaultEmrConfiguration2\",\n \"Fields\": [\n {\n \"Key\": \"property\",\n \"RefValue\": \"PropertyId_NA18c\"\n },\n {\n \"Key\": \"classification\",\n \"StringValue\": \"export\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"EmrConfiguration\"\n }\n ]\n },\n {\n \"Id\": \"EmrClusterObj\",\n \"Name\": \"EmrClusterObj\",\n \"Fields\": [\n {\n \"Key\": \"taskInstanceType\",\n \"StringValue\": \"#{myTaskInstanceType}\"\n },\n {\n \"Key\": \"onFail\",\n \"RefValue\": \"ActionId_SUEgm\"\n },\n {\n \"Key\": \"maximumRetries\",\n \"StringValue\": \"1\"\n },\n {\n \"Key\": \"configuration\",\n \"RefValue\": 
\"EmrConfigurationId_Q9rpL\"\n },\n {\n \"Key\": \"coreInstanceCount\",\n \"StringValue\": \"#{myCoreInstanceCount}\"\n },\n {\n \"Key\": \"masterInstanceType\",\n \"StringValue\": \"#{myMasterInstanceType}\"\n },\n {\n \"Key\": \"releaseLabel\",\n \"StringValue\": \"#{myEMRReleaseLabel}\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"EmrCluster\"\n },\n {\n \"Key\": \"terminateAfter\",\n \"StringValue\": \"3 Hours\"\n },\n {\n \"Key\": \"bootstrapAction\",\n \"StringValue\": \"#{myBootstrapAction}\"\n },\n {\n \"Key\": \"taskInstanceCount\",\n \"StringValue\": \"#{myTaskInstanceCount}\"\n },\n {\n \"Key\": \"coreInstanceType\",\n \"StringValue\": \"#{myCoreInstanceType}\"\n },\n {\n \"Key\": \"applications\",\n \"StringValue\": \"spark\"\n }\n ]\n },\n {\n \"Id\": \"ScheduleId_NfOUF\",\n \"Name\": \"Every 10 hr\",\n \"Fields\": [\n {\n \"Key\": \"period\",\n \"StringValue\": \"10 Hours start time 2\"\n },\n {\n \"Key\": \"startDateTime\",\n \"StringValue\": \"2022-01-07T21:21:00\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"Schedule\"\n },\n {\n \"Key\": \"endDateTime\",\n \"StringValue\": \"2022-01-08T15:44:28\"\n }\n ]\n },\n {\n \"Id\": \"PropertyId_NA18c\",\n \"Name\": \"DefaultProperty1\",\n \"Fields\": [\n {\n \"Key\": \"type\",\n \"StringValue\": \"Property\"\n },\n {\n \"Key\": \"value\",\n \"StringValue\": \"/usr/bin/python3\"\n },\n {\n \"Key\": \"key\",\n \"StringValue\": \"PYSPARK_PYTHON\"\n }\n ]\n }\n ],\n \"ParameterValues\": [\n {\n \"Id\": \"myEMRReleaseLabel\",\n \"StringValue\": \"emr-6.1.0\"\n },\n {\n \"Id\": \"myMasterInstanceType\",\n \"StringValue\": \"m3.xlarge\"\n },\n {\n \"Id\": \"myBootstrapAction\",\n \"StringValue\": \"s3://YOUR-S3-FOLDER/bootstrap_actions/your_boottrap_and_python_lib_installer.sh\"\n },\n {\n \"Id\": \"myEmrStep\",\n \"StringValue\": \"command-runner.jar,spark-submit,--packages,io.delta:delta-core_2.12:0.8.0,--conf,spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension,--conf,spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog,--num-executors,2,--executor-cores,2,--executor-memory,2G,--conf,spark.driver.memoryOverhead=4096,--conf,spark.executor.memoryOverhead=4096,--conf,spark.dynamicAllocation.enabled=false,--name,PixelClickstreamData,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy1.zip,--py-files,s3://YOUR-S3-FOLDER/libraries/librariy2.zip,s3://YOUR-S3-FOLDER/your_script.py, your_script_arg1, your_script_arg2\"\n },\n {\n \"Id\": \"myEmrStep\",\n \"StringValue\": \"command-runner.jar,aws,athena,start-query-execution,--query-string,MSCK REPAIR TABLE your_database.your_table,--result-configuration,OutputLocation=s3://YOUR-S3-FOLDER/logs/your_query_parquest\"\n },\n {\n \"Id\": \"myCoreInstanceType\",\n \"StringValue\": \"m3.xlarge\"\n },\n {\n \"Id\": \"myCoreInstanceCount\",\n \"StringValue\": \"1\"\n }\n ],\n \"ParameterObjects\": [\n {\n \"Id\": \"myEC2KeyPair\",\n \"Attributes\": [\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"An existing EC2 key pair to SSH into the master node of the EMR cluster as the user \\\"hadoop\\\".\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"EC2 key pair\"\n },\n {\n \"Key\": \"optional\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myEmrStep\",\n \"Attributes\": [\n {\n \"Key\": \"helpLink\",\n \"StringValue\": \"https://docs.aws.amazon.com/console/datapipeline/emrsteps\"\n },\n {\n \"Key\": \"watermark\",\n \"StringValue\": 
\"s3://myBucket/myPath/myStep.jar,firstArg,secondArg\"\n },\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"A step is a unit of work you submit to the cluster. You can specify one or more steps\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"EMR step(s)\"\n },\n {\n \"Key\": \"isArray\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myTaskInstanceType\",\n \"Attributes\": [\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"Task instances run Hadoop tasks.\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"Task node instance type\"\n },\n {\n \"Key\": \"optional\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myCoreInstanceType\",\n \"Attributes\": [\n {\n \"Key\": \"default\",\n \"StringValue\": \"m1.medium\"\n },\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"Core instances run Hadoop tasks and store data using the Hadoop Distributed File System (HDFS).\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"Core node instance type\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myEMRReleaseLabel\",\n \"Attributes\": [\n {\n \"Key\": \"default\",\n \"StringValue\": \"emr-5.13.0\"\n },\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"Determines the base configuration of the instances in your cluster, including the Hadoop version.\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"EMR Release Label\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myCoreInstanceCount\",\n \"Attributes\": [\n {\n \"Key\": \"default\",\n \"StringValue\": \"2\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"Core node instance count\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"Integer\"\n }\n ]\n },\n {\n \"Id\": \"myTaskInstanceCount\",\n \"Attributes\": [\n {\n \"Key\": \"description\",\n \"StringValue\": \"Task node instance count\"\n },\n {\n \"Key\": \"optional\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"Integer\"\n }\n ]\n },\n {\n \"Id\": \"myBootstrapAction\",\n \"Attributes\": [\n {\n \"Key\": \"helpLink\",\n \"StringValue\": \"https://docs.aws.amazon.com/console/datapipeline/emr_bootstrap_actions\"\n },\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"Bootstrap actions are scripts that are executed during setup before Hadoop starts on every cluster node.\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"Bootstrap action(s)\"\n },\n {\n \"Key\": \"isArray\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"optional\",\n \"StringValue\": \"true\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n },\n {\n \"Id\": \"myMasterInstanceType\",\n \"Attributes\": [\n {\n \"Key\": \"default\",\n \"StringValue\": \"m1.medium\"\n },\n {\n \"Key\": \"helpText\",\n \"StringValue\": \"The Master instance assigns Hadoop tasks to core and task nodes, and monitors their status.\"\n },\n {\n \"Key\": \"description\",\n \"StringValue\": \"Master node instance type\"\n },\n {\n \"Key\": \"type\",\n \"StringValue\": \"String\"\n }\n ]\n }\n ]\n }\n ```", "PipeLineDefAws": "Please provide Data Pipeline defination json. 
Provide EmrCluster details if using an existing EmrCluster.\n ```js\n {\n \"objects\": [\n {\n \"failureAndRerunMode\": \"CASCADE\",\n \"resourceRole\": \"DataPipelineDefaultResourceRole\",\n \"role\": \"DataPipelineDefaultRole\",\n \"pipelineLogUri\": \"s3://YOUR-S3-FOLDER/logs/data-pipelines/\",\n \"scheduleType\": \"cron\",\n \"name\": \"Default\",\n \"id\": \"Default\"\n },\n {\n \"configuration\": {\n \"ref\": \"EmrConfigurationId_LFzOl\"\n },\n \"name\": \"DefaultEmrConfiguration1\",\n \"id\": \"EmrConfigurationId_Q9rpL\",\n \"type\": \"EmrConfiguration\",\n \"classification\": \"spark-env\"\n },\n {\n \"role\": \"DataPipelineDefaultRole\",\n \"subject\": \"Backcountry-clickstream-delta-hourly: #{node.@pipelineId} Error: #{node.errorMessage}\",\n \"name\": \"TriggerNotificationOnFail\",\n \"id\": \"ActionId_SUEgm\",\n \"message\": \"Backcountry-clickstream-delta-hourly failed to run\",\n \"type\": \"SnsAlarm\",\n \"topicArn\": \"arn:aws:sns:us-west-2:269378226633:duploservices-pravin-test-del77-128329325849\"\n },\n {\n \"schedule\": {\n \"ref\": \"ScheduleId_NfOUF\"\n },\n \"name\": \"EmrActivityObj\",\n \"step\": \"#{myEmrStep}\",\n \"runsOn\": {\n \"ref\": \"EmrClusterObj\"\n },\n \"id\": \"EmrActivityObj\",\n \"type\": \"EmrActivity\"\n },\n {\n \"name\": \"DefaultEmrConfiguration2\",\n \"property\": {\n \"ref\": \"PropertyId_NA18c\"\n },\n \"id\": \"EmrConfigurationId_LFzOl\",\n \"classification\": \"export\",\n \"type\": \"EmrConfiguration\"\n },\n {\n \"taskInstanceType\": \"#{myTaskInstanceType}\",\n \"onFail\": {\n \"ref\": \"ActionId_SUEgm\"\n },\n \"maximumRetries\": \"1\",\n \"configuration\": {\n \"ref\": \"EmrConfigurationId_Q9rpL\"\n },\n \"coreInstanceCount\": \"#{myCoreInstanceCount}\",\n \"masterInstanceType\": \"#{myMasterInstanceType}\",\n \"releaseLabel\": \"#{myEMRReleaseLabel}\",\n \"type\": \"EmrCluster\",\n \"terminateAfter\": \"3 Hours\",\n \"availabilityZone\": \"us-west-2b\",\n \"bootstrapAction\": \"#{myBootstrapAction}\",\n \"taskInstanceCount\": \"#{myTaskInstanceCount}\",\n \"name\": \"EmrClusterObj\",\n \"coreInstanceType\": \"#{myCoreInstanceType}\",\n \"keyPair\": \"#{myEC2KeyPair}\",\n \"id\": \"EmrClusterObj\",\n \"applications\": [\n \"spark\"\n ]\n },\n {\n \"period\": \"10 Hours start time 2\",\n \"startDateTime\": \"2022-01-07T21:21:00\",\n \"name\": \"Every 10 hr\",\n \"id\": \"ScheduleId_NfOUF\",\n \"type\": \"Schedule\",\n \"endDateTime\": \"2022-01-08T15:44:28\"\n },\n {\n \"name\": \"DefaultProperty1\",\n \"id\": \"PropertyId_NA18c\",\n \"type\": \"Property\",\n \"value\": \"/usr/bin/python3\",\n \"key\": \"PYSPARK_PYTHON\"\n }\n ],\n \"parameters\": [\n {\n \"helpText\": \"An existing EC2 key pair to SSH into the master node of the EMR cluster as the user \\\"hadoop\\\".\",\n \"description\": \"EC2 key pair\",\n \"optional\": \"true\",\n \"id\": \"myEC2KeyPair\",\n \"type\": \"String\"\n },\n {\n \"helpLink\": \"https://docs.aws.amazon.com/console/datapipeline/emrsteps\",\n \"watermark\": \"s3://myBucket/myPath/myStep.jar,firstArg,secondArg\",\n \"helpText\": \"A step is a unit of work you submit to the cluster. 
You can specify one or more steps\",\n \"description\": \"EMR step(s)\",\n \"isArray\": \"true\",\n \"id\": \"myEmrStep\",\n \"type\": \"String\"\n },\n {\n \"helpText\": \"Task instances run Hadoop tasks.\",\n \"description\": \"Task node instance type\",\n \"optional\": \"true\",\n \"id\": \"myTaskInstanceType\",\n \"type\": \"String\"\n },\n {\n \"default\": \"m1.medium\",\n \"helpText\": \"Core instances run Hadoop tasks and store data using the Hadoop Distributed File System (HDFS).\",\n \"description\": \"Core node instance type\",\n \"id\": \"myCoreInstanceType\",\n \"type\": \"String\"\n },\n {\n \"default\": \"emr-5.13.0\",\n \"helpText\": \"Determines the base configuration of the instances in your cluster, including the Hadoop version.\",\n \"description\": \"EMR Release Label\",\n \"id\": \"myEMRReleaseLabel\",\n \"type\": \"String\"\n },\n {\n \"default\": \"2\",\n \"description\": \"Core node instance count\",\n \"id\": \"myCoreInstanceCount\",\n \"type\": \"Integer\"\n },\n {\n \"description\": \"Task node instance count\",\n \"optional\": \"true\",\n \"id\": \"myTaskInstanceCount\",\n \"type\": \"Integer\"\n },\n {\n \"helpLink\": \"https://docs.aws.amazon.com/console/datapipeline/emr_bootstrap_actions\",\n \"helpText\": \"Bootstrap actions are scripts that are executed during setup before Hadoop starts on every cluster node.\",\n \"description\": \"Bootstrap action(s)\",\n \"isArray\": \"true\",\n \"optional\": \"true\",\n \"id\": \"myBootstrapAction\",\n \"type\": \"String\"\n },\n {\n \"default\": \"m1.medium\",\n \"helpText\": \"The Master instance assigns Hadoop tasks to core and task nodes, and monitors their status.\",\n \"description\": \"Master node instance type\",\n \"id\": \"myMasterInstanceType\",\n \"type\": \"String\"\n }\n ],\n \"values\": {\n \"myEMRReleaseLabel\": \"emr-6.1.0\",\n \"myMasterInstanceType\": \"m3.xlarge\",\n \"myBootstrapAction\": \"s3://YOUR-S3-FOLDER/bootstrap_actions/your_bootstrap_and_python_lib_installer.sh\",\n \"myEmrStep\": [\n \"command-runner.jar,spark-submit,--packages,io.delta:delta-core_2.12:0.8.0,--conf,spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension,--conf,spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog,--num-executors,2,--executor-cores,2,--executor-memory,2G,--conf,spark.driver.memoryOverhead=4096,--conf,spark.executor.memoryOverhead=4096,--conf,spark.dynamicAllocation.enabled=false,--name,PixelClickstreamData,--py-files,s3://YOUR-S3-FOLDER/libraries/library1.zip,--py-files,s3://YOUR-S3-FOLDER/libraries/library2.zip,s3://YOUR-S3-FOLDER/your_script.py, your_script_arg1, your_script_arg2\",\n \"command-runner.jar,aws,athena,start-query-execution,--query-string,MSCK REPAIR TABLE your_database.your_table,--result-configuration,OutputLocation=s3://YOUR-S3-FOLDER/logs/your_query_parquet\"\n ],\n \"myCoreInstanceType\": \"m3.xlarge\",\n \"myCoreInstanceCount\": \"1\"\n }\n }\n\n ```" }, "lbListener": { "lbType": "Select the type.", "ContainerPort": "Port exposed by the container.", "ExternalPort": "Port the user wants to expose via the load balancer. It need not be the same as the container port.\n", "customCidrs": "Specify a CIDR and click Add. You can add multiple CIDR values (e.g., `10.0.0.0/16`).\n", "visibility": "Visibility of the service to be exposed, which can be either Internal or Public.\n1. **Internal Only**: When this option is selected, the service will be accessible within the infrastructure.\n2. **Public**: When this option is selected, the service will be accessible to the internet. 
\n", "HealthCheck": "Health check URL for this container. This required parameter when Application loadbalancer is selected. \nIt helps ALB to decide whether service is up and running.\n", "BackendProtocol": "Protocol for the service exposed by the container.", "BeProtocolVersion": "Protocol version only applicable when application loadbalancer is selected\n1. **HTTP1**: Send requests to targets using HTTP/1.1. Supported when the request protocol is HTTP/1.1 or HTTP/2.\n2. **HTTP2**: Send requests to targets using HTTP/2. Supported when the request protocol is HTTP/2 or gRPC, but gRPC-specific features are not available.\n3. **gRPC**: Send requests to targets using gRPC. Supported when the request protocol is gRPC.\n", "Certificates": "Certificate to be attached to loadbalancer to expose the service over SSL.", "TgCount": "Target group count.", "HealthyThresholdCount": "The number of consecutive health checks successes required before considering an unhealthy target healthy.", "UnHealthyThresholdCount": "The number of consecutive health check failures required before considering a target unhealthy.", "HealthCheckTimeoutSeconds": "The amount of time, in seconds, during which no response means a failed health check.", "HealthCheckIntervalSeconds": "The approximate amount of time between health checks of an individual target.", "HttpSuccessCode": "The HTTP codes to use when checking for a successful response from a target. You can specify multiple values (for example, \"200,202\").", "GrpcSuccessCode": "The gRPC codes to use when checking for a successful response from a target. You can specify multiple values (for example, \"20,25\") or a range of values (for example, \"0-99\"). Only applicable when protocol version is selected as **GRPc**." }, "redisInstance": { "Name": "The ID of the instance or a fully qualified identifier for the instance.", "DisplayName": "Specify optional user-provided name for the instance.", "Tier": "Select the service tier of the instance.\n 1. **Basic**: *Tier0* standalone instance\n 2. **Standard**: *Tier1* highly available primary/replica instances\n", "RedisVersion": "Specify the version of Redis software.", "MemorySizeGb": "Redis memory size in GiB.", "AuthEnabled": "Indicates whether OSS Redis AUTH is enabled for the instance. If set to \"true\" AUTH is enabled on the instance.", "TransitEncryptionEnabled": "Select to enable the TLS mode of the Redis instance.", "ReadReplicaEnabled": "Select to enable Read replica mode.", "ReplicasCount": "Specify the number of replica nodes. The valid range for the Standard Tier with read replicas enabled is [1-5].", "Redis Config": "Specify Redis Configuration. Refer [here](https://cloud.google.com/memorystore/docs/redis/supported-redis-configurations?authuser=1&_ga=2.150484978.-535098261.1654188041).\nExample: \n```js\n{\n \"activedefrag\":\"yes\",\n \"maxmemory-policy\":\"allkeys-lru\"\n}\n```\n", "Labels": "Specify labels in below format\n```js\n{\n \"key\" : \"value\"\n}\n```\n" }, "appserviceplan": { "name": "Specify the name which should be used for this Service Plan.", "Name": "Enter a unique name for the App Service Plan.", "tier": "Select the SKU tier for the plan.", "platform": "Select the O/S type for the App Services to be hosted in this plan. Supported values include `Windows` and `Linux`.", "location": "Select the Azure Region where the Service Plan should exist.", "serversize": "Select the SKU size for the plan.", "noofinstances": "Specify the number of Workers (instances) to be allocated." 
}, "AddAppRunnerServiceBasic": { "ServiceName": "Enter a friendly name to identify your App Runner service.", "ImageRepositoryType": "The type of image repository. Choose between 'ECR' or 'ECR Public'.", "ImageIdentifier": "The full URI of the container image (e.g., 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest).", "Port": "The port your application listens on inside the container (e.g., 8080).", "Cpu": "The amount of CPU to allocate for the service. Example: 1024 (1 vCPU), 2048 (2 vCPU).", "Memory": "The amount of memory to allocate for the service. Example: 2048 (2GB), 4096 (4GB).", "AutoDeploymentsEnabled": "Enable or disable automatic deployments when the source image changes.", "ObservabilityEnabled": "Enable or disable App Runner observability (CloudWatch Application Insights).", "EnvironmentVariables": "Environment variables to set for your App Runner service.\n```yaml\nENV_VAR_NAME: value\n```", "EnvironmentSecrets": "Key-value pairs for AWS Secrets Manager secrets or SSM parameters to inject as environment variables.\n```yaml\n# For Secrets Manager (entire secret):\nENV_VAR_NAME: arn:aws:secretsmanager:region:account-id:secret:secret-name\n# For specific key in a Secrets Manager secret:\nENV_VAR_NAME: arn:aws:secretsmanager:region:account-id:secret:secret-name:json-key:version-stage:version-id\n# For SSM Parameter:\nENV_VAR_NAME: arn:aws:ssm:region:account-id:parameter/param-name\n```", "Tags": "Key-value pairs to associate with the App Runner service resource.\n```yaml\n# These will be converted to the AWS Tag format { Key: \"KeyName1\", Value: \"Value1\" }\nKeyName1: Value1\nKeyName2: Value2\n`````", "ObservabilityConfigurationArn": "ARN of the observability configuration that is associated with the service. Specified only when Observability is enabled." }, "topic": { "Name": "Name of the Pub/Sub Topic.", "Labels": "Specify key/value pair." }, "emrClusterAdd": { "Name": "Unique Cluster Name", "ReleaseLabel": "Choose ReleaseLabel or to provide ReleaseLabel not in list choose Other.", "IdleTimeBeforeShutdown": "Idle time (when no job is running) in hours before terminating the cluster.", "StepConcurrencyLevel": "Number of steps that can be executed concurrently. 
This setting should depend on the available resources.", "Applications": "Applications to be installed on the cluster (master and slaves).\n```js\n[\n{\n\"Name\" : \"Hadoop\"\n},\n{\n\"Name\" : \"JupyterHub\"\n},\n{\n\"Name\" : \"Spark\"\n},\n{\n\"Name\" : \"Hive\"\n}\n]\n```\n", "Configurations": "Configurations to be installed on the cluster (master and slaves).\n```js\n[\n{\n\"Classification\" : \"hive-site\",\n\"Properties\" : {\n\"hive.metastore.client.factory.class\" : \"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory\",\n\"spark.sql.catalog.my_catalog\" : \"org.apache.iceberg.spark.SparkCatalog\",\n\"spark.sql.catalog.my_catalog.catalog-impl\" : \"org.apache.iceberg.aws.glue.GlueCatalog\",\n\"spark.sql.catalog.my_catalog.io-impl\" : \"org.apache.iceberg.aws.s3.S3FileIO\",\n\"spark.sql.catalog.my_catalog.lock-impl\" : \"org.apache.iceberg.aws.glue.DynamoLockManager\",\n\"spark.sql.catalog.my_catalog.lock.table\" : \"myGlueLockTable\",\n\"spark.sql.catalog.sampledb.warehouse\" : \"s3://name-of-my-bucket/icebergcatalog\"\n}\n}\n]\n```", "BootstrapActions": "BootstrapActions during cluster setup.\n```js\n[\n{\n\"Name\": \"InstallApacheIceberg\",\n\"ScriptBootstrapAction\": {\n\"Args\": [\n\"name\",\n\"value\"\n],\n\"Path\": \"s3://name-of-my-bucket/bootstrap-iceberg.sh\"\n}\n}\n]\n```", "Steps": "Jobs to be executed on the cluster. Please update the S3 path and .py file.\n```js\n[\n{\n\"ActionOnFailure\" : \"CONTINUE\",\n\"Name\" : \"sparkstepTest\",\n\"HadoopJarStep\" : {\n\"Jar\" : \"command-runner.jar\",\n\"Args\" : [\n\"spark-submit\",\n\"s3://YOUR-S3-FOLDER/script3.py\"\n]\n}\n}\n]\n```", "InstanceFleets": "InstanceFleets example.\n```js\n[\n{\n\"Name\" : \"Masterfleet\",\n\"InstanceFleetType\" : \"MASTER\",\n\"TargetSpotCapacity\" : 1,\n\"LaunchSpecifications\" : {\n\"SpotSpecification\" : {\n\"TimeoutDurationMinutes\" : 120,\n\"TimeoutAction\" : \"SWITCH_TO_ON_DEMAND\"\n}\n},\n\"InstanceTypeConfigs\" : [\n{\n\"InstanceType\" : \"m5.xlarge\",\n\"BidPrice\" : \"0.89\"\n}\n]\n},\n{\n\"Name\" : \"Corefleet\",\n\"InstanceFleetType\" : \"CORE\",\n\"TargetSpotCapacity\" : 1,\n\"TargetOnDemandCapacity\" : 1,\n\"LaunchSpecifications\" : {\n\"OnDemandSpecification\" : {\n\"AllocationStrategy\" : \"lowest-price\",\n\"CapacityReservationOptions\" : {\n\"UsageStrategy\" : \"use-capacity-reservations-first\",\n\"CapacityReservationResourceGroupArn\" : \"String\"\n}\n},\n\"SpotSpecification\" : {\n\"AllocationStrategy\" : \"capacity-optimized\",\n\"TimeoutDurationMinutes\" : 120,\n\"TimeoutAction\" : \"TERMINATE_CLUSTER\"\n}\n},\n\"InstanceTypeConfigs\" : [\n{\n\"InstanceType\" : \"m4.xlarge\",\n\"BidPriceAsPercentageOfOnDemandPrice\" : 100\n}\n]\n},\n{\n\"Name\" : \"Taskfleet\",\n\"InstanceFleetType\" : \"TASK\",\n\"TargetSpotCapacity\" : 1,\n\"LaunchSpecifications\" : {\n\"OnDemandSpecification\" : {\n\"AllocationStrategy\" : \"lowest-price\",\n\"CapacityReservationOptions\" : {\n\"CapacityReservationPreference\" : \"none\"\n}\n},\n\"SpotSpecification\" : {\n\"TimeoutDurationMinutes\" : 120,\n\"TimeoutAction\" : \"TERMINATE_CLUSTER\"\n}\n},\n\"InstanceTypeConfigs\" : [\n{\n\"InstanceType\" : \"m4.xlarge\",\n\"BidPrice\" : \"0.89\"\n}\n]\n}\n]\n```", "InstanceGroups": "InstanceGroups example.\n```js\n[\n{\n\"Name\": \"Master\",\n\"Market\": \"ON_DEMAND\",\n\"InstanceRole\": \"MASTER\",\n\"InstanceType\": \"m4.large\",\n\"InstanceCount\": 1,\n\"EbsConfiguration\": {\n\"EbsBlockDeviceConfigs\": [\n{\n\"VolumeSpecification\": {\n\"VolumeType\": \"gp2\",\n\"SizeInGB\": 10\n},\n\"VolumesPerInstance\": 1\n}\n],\n\"EbsOptimized\": false\n}\n},\n{\n\"Name\": \"Core\",\n\"Market\": \"ON_DEMAND\",\n\"InstanceRole\": \"CORE\",\n\"InstanceType\": \"m4.large\",\n\"InstanceCount\": 1,\n\"EbsConfiguration\": {\n\"EbsBlockDeviceConfigs\": [\n{\n\"VolumeSpecification\": {\n\"VolumeType\": \"gp2\",\n\"SizeInGB\": 10\n},\n\"VolumesPerInstance\": 1\n}\n],\n\"EbsOptimized\": false\n}\n},\n{\n\"Name\": \"Task\",\n\"Market\": \"ON_DEMAND\",\n\"InstanceRole\": \"TASK\",\n\"InstanceType\": \"m4.large\",\n\"InstanceCount\": 1,\n\"EbsConfiguration\": {\n\"EbsBlockDeviceConfigs\": [\n{\n\"VolumeSpecification\": {\n\"VolumeType\": \"gp2\",\n\"SizeInGB\": 10\n},\n\"VolumesPerInstance\": 1\n}\n],\n\"EbsOptimized\": false\n}\n}\n]\n```", "ManagedScalingPolicy": "ManagedScalingPolicy example.\n```js\n{\n\"ComputeLimits\" : {\n\"UnitType\" : \"Instances\",\n\"MinimumCapacityUnits\" : 2,\n\"MaximumCapacityUnits\" : 5,\n\"MaximumOnDemandCapacityUnits\" : 5,\n\"MaximumCoreCapacityUnits\" : 3\n}\n}\n```", "VisibleToAllUsers": "Cluster Visible To All Users." }, "AddEcache": { "CacheType": "Select the cache engine to be used for this cache cluster.", "Name": "Enter the Cluster Identifier.", "EngineVersion": "Select the version number of the cache engine to be used. If not set, defaults to the latest version.", "Size": "Select the NodeType.", "GlobalReplicationGroupId": "Specify the global replication group used to link primary and secondary datastores across regions.", "GlobalReplicationGroupDescription": "Provide a description for the global replication group to help identify its purpose.", "SecondaryTenantId": "Select the region where the secondary datastore is created for cross-region replication.", "Replicas": "Specify the initial number of cache nodes that the cache cluster will have.", "AutomaticFailoverEnabled": "Enable automatic failover to the secondary datastore if the primary region becomes unavailable.", "MultiAzEnabled": "Enable Multi-AZ deployment to improve availability within a region.", "EnableEncryptionAtTransit": "Select if Encryption At Transit is needed.", "ParameterGroupName": "Specify the name of the parameter group to associate with this cache cluster.", "SnapshotArnsInput": "Specify the ARN of a Redis RDB snapshot file stored in Amazon S3. Example- `arn:aws:s3:::s3-backup-foldername/backupobject.rdb`", "ClusteringEnabled": "Enable to create Redis in Cluster mode.", "NoOfShards": "Specify the number of Shards for the Cluster.", "SnapshotName": "Select the snapshot/backup you want to use for creating the Redis.", "SnapshotRetentionLimit": "Specify the retention limit in days (1-35).", "SnapshotWindowStartTime": "Specify the daily time when the automated snapshot window begins.", "SnapshotWindowEndTime": "Specify the duration, in hours, of the automated snapshot window.", "Kms": "Select a KMS Key.", "LogDeliveryConfiguration": "Enables exporting Redis engine logs and slow logs to Amazon CloudWatch Logs for monitoring and troubleshooting. Select `CloudWatch` to configure log delivery settings.", "EnableEncryptionAtRest": "Enable/disable encryption of data stored on disk using AWS-managed keys. Recommended for protecting sensitive data at rest.", "Serverless": "Select to enable automatic scaling based on usage. Capacity scales up or down automatically, and billing is based on actual consumption rather than provisioned capacity.", "Description": "Enter a description for the ElastiCache cluster." 
}, "pgAuthCreate": { "PrincipalName": "Name of the Azure AD principal (user, group, or service principal) that will authenticate to the PostgreSQL Flexible Server.", "ObjectId": "Object ID (or SID) of the Azure AD principal. This uniquely identifies the principal in Azure Active Directory.", "TenantId": "Azure Active Directory Tenant ID where the principal is defined.", "Role": "Type of Azure AD principal. Select whether this entry represents a user, group, or service principal." }, "addInfraVaultSecret": { "name": "Enter a unique name for the secret. Use your tenant name as prefix.", "value": "Enter the secret value to store.", "contentType": "Specify the type of secret." }, "AddManagedSql": { "Name": "Specify a unique name for your Managed SQL instance. Must not contain spaces.", "Username": "Enter the administrator username for the SQL instance.", "Password": "Enter a strong password for the SQL instance.", "serviceTier": "Select the service tier for the instance.", "hwOptions": "Select the hardware generation (e.g., Gen5) for the SQL instance.", "vcoreOption": "Select the number of virtual cores (vCores) for the instance.", "storage": "Specify the storage size in GB for the SQL instance.", "subnetName": "Select the subnet where the SQL instance will be deployed." }, "AddRdsAuroraReplica": { "Identifier": "Please provide a unique identifier for the RDS replica instance that is unique across all tenants. The cluster identifier is used to determine the cluster's endpoint. An identifier cannot end with a hyphen or contain two consecutive hyphens or start with a number. It should also be 49 characters or shorter and must be in all lowercase.", "Engine": "Select Database engine for creating RDS instance.", "EngineVersion": "Select database engine version. If not selected latest version will be used while creating database. Select type as 'Other' if you don't see desired option in dropdown list.", "DbSize": "Instance size for RDS. Select type as 'Other' if you don't see desired option in dropdown list.", "MinCapacity": "Set the minimum capacity unit for the DB cluster. Each capacity unit is equivalent to a specific compute and memory configuration.", "MaxCapacity": "Set the maximum capacity unit for the DB cluster. Each capacity unit is equivalent to a specific compute and memory configuration.", "AvailabilityZone": "Select availability zone for high availability." }, "azureAddVMScaleSet": { "Name": "The name of the virtual machine scale set resource.", "Subnets": "Select the subnet.", "InstanceType": "The size of the Virtual Machine.", "Capacity": "The number of virtual machines in the scale set.", "ImageId": "Choose the Image for the VM. Image should be compatible with the agent platform. Select type as \"Other\" if you don't see desired option in dropdown.", "Username": "The administrator username for the VM.", "Password": "The administrator username for the VM." }, "k8sServiceBasicForm": { "ServiceName": "Any friendly name to identify your service.", "ImageName": "Enter Image Name. Example `busybox:latest`" }, "AddContainerRegistry": { "Name": "Specify the name of the Container Registry.", "Tier": "Specify the SKU name of the container registry.", "AdminUserEnabled": "Select to enable the admin user.", "AnonymousPullEnabled": "Select to allow anonymous (unauthenticated) pull access to this Container Registry. This is only supported on resources with the `Standard` or `Premium` SKU.", "DataEndpointEnabled": "Select to enable dedicated data endpoints for this Container Registry. 
This is only supported on resources with the `Premium` SKU.", "PublicNetworkAccess": "Select to enable/disable public network access for the container registry." }, "selectIotDeviePackageCert": { "certificateId": "Choose the certificate to download the device package." }, "start-shell": { "secretName": "Specify the name of the secret; it must be unique.", "secretType": "Select the type of secret. [Details](https://kubernetes.io/docs/concepts/configuration/secret/)", "secretData": "Enter Secret Details. Refer below for examples:\n 1. **Opaque**: arbitrary user-defined data. [example](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets)\n 2. **kubernetes.io/dockercfg**: serialized ~/.dockercfg file. [example](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets)\n 3. **kubernetes.io/dockerconfigjson**: serialized ~/.docker/config.json file. [example](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets)\n 4. **kubernetes.io/basic-auth**: credentials for basic authentication. [example](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret)\n 5. **kubernetes.io/ssh-auth**: credentials for SSH authentication. [example](https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets)\n 6. **kubernetes.io/tls**: data for a TLS client or server. [example](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets)\n", "secretLabels": "Add labels to a Secret for better organization and selection. \n```js\n{\n \"environment\": \"production\",\n \"type\": \"credentials\"\n}\n```\n", "secretAnnotations": "Specify annotations in the format below.\n```js\n{\n \"description\": \"This secret stores database credentials\",\n \"level\": \"high\"\n}\n```" }, "AddMySQLFlexi": { "Name": "Specify the name which should be used for this MySQL Flexible Server.", "AdminUsername": "Specify the Administrator Login for the MySQL Flexible Server.", "SkuTier": "Select the compute tier. The hardware list will be populated based on the Tier selection.", "SkuName": "Select the Hardware as per the requirement.", "AdminPassword": "Specify the Password associated with the administrator_login for the MySQL Flexible Server.", "Version": "Select the version of the database.", "StorageSizeGB": "Select the maximum storage allowed. Possible values are between `20` and `16384`.", "Iops": "Specify the storage IOPS for the MySQL Flexible Server.", "AutoGrow": "Select to enable Storage Auto Grow.", "SubnetId": "Select the subnet to create the MySQL Flexible Server.", "BackupRetentionDays": "Enter the backup retention days. Accepted values are between 1 and 35 days.", "GeoRedundantBackup": "Select to enable geo redundant backup.", "ProcessorType": "Select the processor.", "HighAvailability": "Select value\n1. **Disabled** - High Availability Mode disabled.\n2. **Same zone** - A standby server is always available within the same zone as the primary server.\n3. **Zone redundant** - A standby server is always available within another zone in the same region as the primary server.\nAuto-grow is enabled by default when high availability is enabled." }, "emrserverlsess_add_runjob_sub_form_basics": { "ApplicationName": "EMR Serverless Application Name.", "Type": "EMR Serverless Application Architecture type.", "Name": "Enter Application Run Job Name.", "ApplicationId": "Application ID.", "ExecutionTimeoutMinutes": "Specify execution time in minutes." 
}, "cwrule-target-add": { "Target Name": "Specify the name of the rule you want to add targets to.", "Target Type": "Select the Target type." }, "registerTarget": { "loadbalancers": "Select the loadbalancer. For TargetType as alb, loadbalancer must have at least one listener that matches the target group port.", "ip": "Enter IP address.", "hosts": "Select hosts.", "lambdas": "Select the Lambda function(s) you want to register." }, "addLambda": { "FunctionName": "Unique name for your Lambda Function.", "Description": "Description of what your Lambda Function does.", "PackageType": "Lambda deployment package type.\n1. **Zip:** Upload a .zip file as your deployment package using Amazon Simple Storage Service (Amazon S3).\n2. **Image:** Upload your container images to Amazon Elastic Container Registry (Amazon ECR).\n", "RuntimeValue": "Runtime is required if the deployment package type is a `Zip` file archive. Select the runtime compatible to the function code.", "MemorySize": "Amount of memory in MB your Lambda Function can use at runtime.", "TimeoutInt": "The amount of time in seconds that Lambda allows a function to run before stopping it.", "Handler": "The name of the method within your code that Lambda calls to execute your function. Handler is required if the deployment package type is a `Zip` file archive.", "EnvironmentVariables": "Map of environment variables that are accessible from the function code during execution.\nExample:\n```js\n{\n \"Variables\": {\"foo\":\"bar\"}\n}\n```\n", "S3Bucket": "Select the S3 bucket location containing the function's deployment package.", "S3BucketKey": "Enter the S3 key of an object containing the function's deployment package.", "ImageConfiguration": "Specify container image configuration values.\nExample:\n```js\n{\n \"Command\": [\n \"app.handler\"\n ],\n \"EntryPoint\": [\n \"/usr/local/bin/python\",\n \"-m\",\n \"awslambdaruntimeclient\"\n ],\n \"WorkingDirectory\": \"/var/task\"\n}\n```\n", "ImageUri": "Enter the ECR image URI containing the function's deployment package.", "EphemeralStorage": "Specify Ephemeral Storage in MB, allows you to configure the storage upto 10240 MB. The default value set to 512 MB.", "DeadLetterQueue": "Select SQS queue or SNS Topic to send events from an an asynchronous invocation.", "MaximumRetryAttempts": "Specify the maximum number of times to retry when the function returns an error.", "MaximumEventAgeInSeconds": "Specify the maximum amount of time in seconds to keep unprocessed events in the queue." }, "updateASG": { "MinSize": "Minimum number of instances that should always be running in this Autoscaling Group.", "MaxSize": "Maximum number of instances that this Autoscaling Group can scale out to.", "EnableClusterAutoScaler": "Enable automatic scaling of this group using the cluster autoscaler.", "CanScaleFromZeroUpdate": "Allow this group to scale down to zero instances when not needed." }, "emr_serverless_add_sub_form_limit": { "MaximumCapacityCpu": "Specify the maximum allowed CPU for an application.", "MaximumCapacityMemory": "Specify the maximum allowed resources for an application.", "MaximumCapacityDisk": "Specify the maximum allowed disk for an application." }, "AddContainerAppAdvanced": { "ContainerName": "Enter a unique name for your container. 
Only lowercase letters, numbers, hyphens, and periods are allowed.", "ContainerNameImage": "Specify the container image to deploy, including the registry and tag, e.g., xyz.registry.io/containerapps-helloworld:latest.", "Server": "Enter the container registry server URL where your image is hosted.", "UserName": "Provide the username for the registry if authentication is required.", "Password": "Provide the password for the registry if authentication is required.", "CpuCores": "Specify the number of CPU cores to allocate to this container.", "Memory": "Specify the memory to allocate, e.g., 1.25Gi or 4Gi.", "CommandOverride": "Override the default container command here.", "Argumentsoverride": "Override the default container arguments here." }, "K8sJobBasicForm": { "JobName": "Specify the name of the Job; it must be unique. Minimum 3 characters.", "Schedule": "Specify schedule in Cron format. Example- `0 0 * * 0`. This will run once a week at midnight on Sunday morning. For more help on cron schedule expressions, click [here](https://crontab.guru/#0_0_*_*_0).", "ImageName": "Specify the image name. Example- `perl:5.34.0`.", "CleanupFinished": "Specify time in seconds.\n1. **if set as 0**: Job becomes eligible to be deleted immediately after it finishes.\n2. **if field is unset/blank**: Jobs won't be automatically deleted. \n3. **if the field is set**: Job is eligible to be automatically deleted after the specified number of seconds.\n", "restartPolicy": "1. **Never**: Job won't restart on failure.\n2. **OnFailure**: Job will restart on failure based on the retry count set.\n", "Retries": "Set the retry limit. Once the BackoffLimit is reached, the Job will be marked as failed.\n", "envVariables": "Environment variables to be passed to the jobs in the YAML format.\n```yaml\n- Name: VARIABLE1\n  Value: abc.com\n- Name: VARIABLE2\n  Value: test_value\n```\n", "ContainerName": "Specify the Name of the Container.", "command": "Specify the command attribute for the container.\n```yaml\n# Example 1\n[\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n\n# Example 2\n- /bin/sh\n- -c\n- >-\n  echo \"Hello !!!\"\n```\n", "args": "Add arguments.\n```yaml\n# Example 1\n[\"$(VARIABLE1)\"]\n\n# Example 2\n[\"-c\", \"while true; do echo hello; sleep 10; done\"]\n\n# Example 3\n- -c\n- echo \"Hello world!\" && sleep 5 && exit 42\n```\n", "otherContainerConfig": "Add additional container configurations here.\n```yaml\nimagePullPolicy: Always\nresources: {}\nterminationMessagePath: /dev/termination-log\nterminationMessagePolicy: File\nvolumeMounts:\n- mountPath: /opt\n  name: my-volume\n```\n", "AllocationTags": "Allocation tags are the simplest way to constrain containers/pods to hosts/nodes. The DuploCloud/Kubernetes Orchestrator will make sure containers run on the hosts having the same allocation tags." }, "batchQueue": { "tags": "Key/value pairs of tags to be assigned to queues.\nSample value for the tags configuration:\n```js\n{\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n}\n```" }, "mq-broker-add": { "Name": "Name of the broker.", "EngineType": "Type of broker engine.", "EngineVersion": "Version of the broker engine.", "DeploymentMode": "Deployment mode of the broker.\nRefer guide for [RabbitMQ](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/rabbitmq-broker-architecture.html) and [Apache ActiveMQ](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-architecture.html).\n", "StorageType": "Select StorageType\n 1. **ActiveMQ**: Valid values are EFS and EBS.\n 2. 
**RabbitMQ**: Only EBS is supported by default. Select a supported InstanceType from the list.\n", "Username": "Username of the user.", "password": "Password of the user. Must be 12 to 250 characters long, contain at least 4 unique characters, and must not contain commas.", "BrokerConfiguration": "Select the Broker Configuration.", "CloudWatchLogs": "(Optional) Select to enable general logging via CloudWatch.", "CloudWatchGeneralLogs": "(Optional) Select to enable general logging via CloudWatch.", "CloudWatchAuditLogs": "(Optional) Select to enable audit logging. Only applicable for Engine Type of `ActiveMQ`.", "PublicNetworkAccess": "Select to enable connections from applications outside of the VPC that hosts the broker's subnets.", "Subnet": "Select the Subnet in which to launch the broker. A `Single Instance` deployment requires one subnet. An `Active/Standby` deployment requires multiple subnets.", "Encryption": "Select Encryption.", "IsMaintenanceWindow": "Select to enable the broker maintenance window.", "StartDay": "Start Day of the week.", "HostInstanceType": "Select a supported InstanceType from the list.", "AuthenticationStrategy": "Authentication strategy associated with the configuration. `LDAP` is not supported for the RabbitMQ engine type. Click [here for LDAP](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/security-authentication-authorization.html#ldap-get-started).", "DomainName": "Specify the domain name assigned to the Amazon MQ broker.", "SecondDomainName": "Specify the secondary custom domain name associated with the Amazon MQ broker.", "ServiceAccountUsername": "Specify Service account username.", "ServiceAccountPassword": "Specify Service account password.", "UserBase": "Specify the fully qualified name of the directory where you want to search for users.", "UserSearchMatching": "Search criteria for users.", "RoleBase": "Enter the fully qualified name of the directory to search for a user's groups.", "RoleSearchMatching": "Search criteria for groups.", "UserRoleName": "Name of the LDAP attribute for the user group membership.", "RoleName": "Specify the LDAP attribute used to identify the group name in the result of the group membership query.", "UserSearchSubtree": "Specifies whether the directory search scope for user group includes the entire sub-tree.", "RoleSearchSubtree": "Specifies whether the directory search scope for role includes the entire sub-tree." }, "AddServiceAdvanced": { "VolumeMappingsK8s": "Volume and volume mount settings in the Duplo simplified format. If you specify Volumes here, Duplo will take care of configuring pod volumes and volume mounts.\n```yaml\n---\n# Statefulset using EBS volume\n- AccessMode: ReadWriteOnce\n  Name: data\n  Path: /attachedvolume\n  Size: 10Gi\n# Deployment using host directory mount in read-only mode\n- Name: rootfs # Name of the volume\n  Path: /rootfs # Path in the container\n  ReadOnly: true # Set if it is readonly mount\n  Spec: # K8s Defined volumes\n    HostPath:\n      Path: / # Path on the host\n# Deployment using host directory mount in read-write mode\n- Name: var-run\n  Path: /var/run\n  Spec:\n    HostPath:\n      Path: /var/run\n# Deployment mounting secret into directory\n- Name: nginx\n  Path: /etc/nginx/conf.d\n  Spec:\n    Secret:\n      SecretName: nginx\n# Deployment mounting config map into directory\n- Name: nginx\n  Path: /etc/nginx/conf.d\n  Spec:\n    ConfigMap:\n      Name: nginx\n# Deployment using PersistentVolumeClaim. 
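(assumes the PVC referenced below, efs-claim, already exists)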
\n```", "PodConfigK8s": "Deployment and Pod configuration\n```yaml\nSubdomain: same-name-as-my-service\nLabels:\n  label1: values1\nAnnotations:\n  seccomp.security.alpha.kubernetes.io/pod: docker/default\n  duplocloud.net/external-scaling: \"true\" ## to support an external scaling HPA such as KEDA\nPodLabels:\n  label1: values1\nPodAnnotations:\n  seccomp.security.alpha.kubernetes.io/pod: docker/default\nRestartPolicy: Always\nPodSecurityContext:\n  RunAsUser: 1000\n  RunAsGroup: 3000\n  FsGroup: 2000\nVolumes:\n  - name: cache-volume\n    emptyDir: {}\n  - name: config-vol\n    configMap:\n      name: log-config\n      items:\n        - key: log_level\n          path: log_level\nImagePullSecrets:\n  - name: myregistrykey\nServiceAccountName: my-service-account-name\nAutomountServiceAccountToken: true\n# Mount a SecretProviderClass volume\nVolumes:\n  - Name: secretvolume-name\n    Csi:\n      driver: secrets-store.csi.k8s.io\n      readOnly: true\n      VolumeAttributes:\n        secretProviderClass: my-secret-provider-class\n# Deployment strategy\nDeploymentStrategy:\n  RollingUpdate:\n    MaxSurge: 1\n    MaxUnavailable: 0\n# Add initContainers and additionalContainers\ninitContainers:\n  - name: bootstrap\n    image: busybox:1.28\n    command:\n      - sh\n      - '-c'\n      - echo duplo\nadditionalContainers:\n  - name: init-myservice\n    image: busybox:1.28\n    restartPolicy: Always\n    command:\n      - sh\n      - '-c'\n      - ping 127.0.0.1\n```", "ContainerConfigK8s": "Container-level configuration\n```yaml\nImagePullPolicy: IfNotPresent\nArgs:\n  - '--somearg'\nLivenessProbe:\n  httpGet:\n    path: /\n    port: 80\n    httpHeaders:\n      - name: Custom-Header\n        value: Awesome\n  initialDelaySeconds: 3\n  periodSeconds: 3\nReadinessProbe:\n  exec:\n    command:\n      - cat\n      - /tmp/healthy\n  initialDelaySeconds: 5\n  periodSeconds: 5\nstartupProbe:\n  initialDelaySeconds: 1\n  periodSeconds: 5\n  timeoutSeconds: 1\n  successThreshold: 1\n  failureThreshold: 1\n  exec:\n    command:\n      - cat\n      - /etc/nginx/nginx.conf\nSecurityContext:\n  Capabilities:\n    Add:\n      - NET_BIND_SERVICE\n    Drop:\n      - ALL\n  ReadOnlyRootFilesystem: false\n  RunAsNonRoot: true\n  RunAsUser: 1000\nAutomountServiceAccountToken: true\nEnvFrom:\n  - secretRef:\n      name: secret_name\n  - configMapRef:\n      name: configmap-name\n# Mount a SecretProvider volume\nVolumesMounts:\n  - Name: volume-name\n    MountPath: /mnt/secrets\n    readOnly: true\nEnvFrom:\n  - SecretRef:\n      Name: secretobject-name\n# Set resource requests and limits\nresources:\n  requests:\n    memory: \"10Gi\"\n    cpu: \"500m\"\n    ephemeral-storage: 2Gi\n  limits:\n    memory: \"10Gi\"\n    cpu: \"500m\"\n    ephemeral-storage: 4Gi\n# Pod toleration example\ntolerations:\n  - key: key1\n    operator: Equal\n    value: value1\n    effect: NoSchedule\n  - key: example-key\n    operator: Exists\n    effect: NoExecute\n    tolerationSeconds: 6000\n# Lifecycle hook sample\nlifecycle:\n  postStart:\n    exec:\n      command:\n        - /bin/sh\n        - '-c'\n        - date > /container-started.txt\n  preStop:\n    exec:\n      command:\n        - /usr/sbin/nginx\n        - '-s'\n        - quit\n# StatefulSet update strategy\nStatefulSetUpdateStrategy:\n  RollingUpdate:\n    Partition: 1\n  Type: RollingUpdate\n# dnsPolicy and dnsConfig\ndnsPolicy: None\ndnsConfig:\n  nameservers:\n    - 8.8.8.8\n    - 1.1.1.1\n  searches:\n    - mycustom.domain.local\n  options:\n    - name: ndots\n      value: '2'\n    - name: timeout\n      value: '1'\n```", "VolumeMappings": "Example of mounting host directories into the container\n```js\n\"/home/ubuntu/data:/data\",\"/home/ubuntu/config:/config\"\n```\n", "otherDockerConfig": "Any custom Docker create-container configuration can be passed here, based on the documentation at https://docs.docker.com/engine/api/v1.41/#operation/ContainerCreate. For example, the following config overrides the entrypoint of the container and sets a few labels:\n```js\n{\n  \"Entrypoint\": [\n    \"/bin/bash\",\n    \"-c\",\n    \"sleep 1h\"\n  ],\n  \"Labels\": {\n    \"com.example.vendor\": \"Acme\",\n    \"com.example.license\": \"GPL\",\n    \"com.example.version\": \"1.0\"\n  }\n}\n```
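\nA further illustrative snippet (an assumption, not a Duplo-specific feature): other Docker Engine API create-container fields such as `HostConfig` can be passed the same way, e.g. a memory limit in bytes and dropped Linux capabilities:\n```js\n{\n  \"HostConfig\": {\n    \"Memory\": 536870912,\n    \"CapDrop\": [\"ALL\"]\n  }\n}\n```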
" } } } }