TRADING PLACES: Instead of potentially joining Brad Marchand and the Bruins, defenseman Ryan McDonagh is headed to the Lightning.
The good news for the Bruins is general manager Don Sweeney clearly made his team better leading up to the trade deadline with the acquisitions of power forward Rick Nash, defenseman Nick Holden and depth forwards Brian Gionta and Tommy Wingels.
The bad news? Just about everyone else that matters in the East did, too.
Let’s start at the top. With about 15 minutes to go before the deadline yesterday, it appeared the B’s had made the biggest splash, acquiring Nash on Sunday to bolster their top six forwards with a player who still can dominate games at times. In the past week, Sweeney also has added depth up front in Gionta and Wingels and on the back end with Holden.
But then the Tampa Bay Lightning, leaders of the Atlantic Division, made the blockbuster deal, landing captain and defenseman Ryan McDonagh and forward J.T. Miller from the New York Rangers for 20-goal scorer Vladislav Namestnikov, a 2018 first-round pick, a conditional second-round pick and prospects Brett Howden and Libor Hajek.
According to a source outside the organization, the B’s tried aggressively to land McDonagh — it’s not known what pieces Sweeney was willing to give up — but in the end, Tampa Bay GM Steve Yzerman combined a contributing player off his roster with picks and prospects. Sweeney likely wasn’t willing to go that far.
“We all are in the business of trying to improve our teams, either right now or next year. There are 31 teams that are jockeying this time of the year,” said Sweeney, who wouldn’t confirm his involvement in the McDonagh talks.
If it came down to a choice between Nash or McDonagh, Sweeney made the right choice by prioritizing the power forward. To these eyes, adding Nash to and subtracting Ryan Spooner from the forward group is a more significant upgrade than adding McDonagh to the defense corps and subtracting either Torey Krug or Matt Grzelcyk (it wouldn’t have been Zdeno Chara) from the left side. Had McDonagh been a right shot, it would have been a different story, but he’s not.
In theory, it would have been great to get both Nash and McDonagh, but at what cost? The B’s gave up their 2018 first-round pick to get Nash, so it surely would have cost them another young piece from the current lineup, most likely either Danton Heinen or Jake DeBrusk, and a good chunk of their still-fertile farm system. In the end, Sweeney was able to improve his team with Nash — who Sweeney said is a possibility to be signed long term, though a contract has not yet been discussed — and protect both the future and the team chemistry that has helped the Bruins to be one of the biggest surprises of the NHL this season.
But the Lightning clearly improved themselves.
And they were not the only ones. The Maple Leafs, still the B’s most likely first-round opponent, made what appeared to be a Bruins-specific deal in obtaining former Canadiens forward Tomas Plekanec. He’s given David Krejci fits through the years. In the Metropolitan Division, the two-time defending champion Pittsburgh Penguins filled a specific need by acquiring third-line center Derick Brassard from Ottawa. New Jersey added speed and brawn with Michael Grabner and Patrick Maroon. Columbus added Bruins killer Thomas Vanek for a pittance along with forward Mark Letestu and defenseman Ian Cole.
Judging from the comings and goings, there are many teams thinking the same way the Bruins are thinking — that this NHL season is wide-open, and what the heck, why not us?
There is a little more than a month until the end of the regular season. The rosters are more or less set. Now it is time for the real contenders to separate themselves from the pack.
The Bruins have as good a shot as any team to emerge, but they will have to earn whatever they get. |
// Copyright 2019 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/ui/views/tabs/tab_group_underline.h"
#include <memory>
#include <utility>
#include "chrome/browser/ui/layout_constants.h"
#include "chrome/browser/ui/tabs/tab_style.h"
#include "chrome/browser/ui/views/tabs/tab.h"
#include "chrome/browser/ui/views/tabs/tab_group_header.h"
#include "chrome/browser/ui/views/tabs/tab_group_views.h"
#include "components/tab_groups/tab_group_id.h"
#include "components/tab_groups/tab_group_visual_data.h"
#include "third_party/skia/include/core/SkColor.h"
#include "ui/base/metadata/metadata_impl_macros.h"
#include "ui/gfx/canvas.h"
#include "ui/views/background.h"
#include "ui/views/view.h"
constexpr int TabGroupUnderline::kStrokeThickness;
TabGroupUnderline::TabGroupUnderline(TabGroupViews* tab_group_views,
const tab_groups::TabGroupId& group)
: tab_group_views_(tab_group_views), group_(group) {}
void TabGroupUnderline::OnPaint(gfx::Canvas* canvas) {
SkPath path = GetPath();
cc::PaintFlags flags;
flags.setAntiAlias(true);
flags.setColor(tab_group_views_->GetGroupColor());
flags.setStyle(cc::PaintFlags::kFill_Style);
canvas->DrawPath(path, flags);
}
void TabGroupUnderline::UpdateBounds(const gfx::Rect& group_bounds) {
const int start_x = GetStart(group_bounds);
const int end_x = GetEnd(group_bounds);
// The width may be zero if the group underline and header are initialized at
// the same time, as with tab restore. In this case, don't update the bounds
// and defer to the next paint cycle.
if (end_x <= start_x)
return;
const int y =
group_bounds.height() - GetLayoutConstant(TABSTRIP_TOOLBAR_OVERLAP);
SetBounds(start_x, y - kStrokeThickness, end_x - start_x, kStrokeThickness);
}
// static
int TabGroupUnderline::GetStrokeInset() {
return TabStyle::GetTabOverlap() + kStrokeThickness;
}
int TabGroupUnderline::GetStart(const gfx::Rect& group_bounds) const {
return group_bounds.x() + GetStrokeInset();
}
int TabGroupUnderline::GetEnd(const gfx::Rect& group_bounds) const {
const Tab* last_grouped_tab = tab_group_views_->GetLastTabInGroup();
if (!last_grouped_tab)
return group_bounds.right() - GetStrokeInset();
return group_bounds.right() +
(last_grouped_tab->IsActive() ? kStrokeThickness : -GetStrokeInset());
}
SkPath TabGroupUnderline::GetPath() const {
SkPath path;
path.moveTo(0, kStrokeThickness);
path.arcTo(kStrokeThickness, kStrokeThickness, 0, SkPath::kSmall_ArcSize,
SkPathDirection::kCW, kStrokeThickness, 0);
path.lineTo(width() - kStrokeThickness, 0);
path.arcTo(kStrokeThickness, kStrokeThickness, 0, SkPath::kSmall_ArcSize,
SkPathDirection::kCW, width(), kStrokeThickness);
path.close();
return path;
}
BEGIN_METADATA(TabGroupUnderline, views::View)
END_METADATA
|
The Perforation-Operation Time Interval: An Important Mortality Indicator in Peptic Ulcer Perforation. OBJECTIVE To determine the significance of the Perforation-Operation Interval (POI) as an early prognostic indicator in patients with peritonitis caused by peptic ulcer perforation. STUDY DESIGN Case series. Place and Duration of the Study: Department of General Surgery, Konaseema Institute of Medical Sciences and RF, Amalapuram, Andhra Pradesh, India, from 2008-2011. MATERIALS AND METHOD This study included 150 patients with generalized peritonitis who were diagnosed with Perforated Peptic Ulcers (PPUs). The diagnosis of PPU was established on the basis of the history, the clinical examination and the radiological findings. The perforation-operation interval was calculated from the time of onset of symptoms such as severe abdominal pain or vomiting until the time the patient was operated on. RESULT Out of the 150 patients, 134 were males and 16 were females, with a male to female ratio of 9:1. Their ages ranged between 25-70 years. Out of the 150 patients, 65 (43.3%) presented within 24 hours of the onset of severe abdominal pain (Group A), 27 patients (18%) presented between 24-48 hours of the onset of severe abdominal pain (Group B) and 58 patients (38.6%) presented after 48 hours (Group C). There was no mortality in Group A, and morbidity was higher in Groups B and C. There were 15 deaths in Group C. CONCLUSION The mortality and morbidity of peptic ulcer perforation can be decreased by decreasing the perforation-operation time interval, which, as per our study, appeared to be the single most important mortality and morbidity indicator in peptic ulcer perforation. |
# source repo: dropofwill/author-attr-experiments
import numpy as np
import os
import time as tm
from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.decomposition import RandomizedPCA
docs = datasets.load_files(container_path="../../sklearn_data/problemI")
X, y = docs.data, docs.target
baseline = 1/float(len(list(np.unique(y))))
# Split the dataset into testing and training sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=2)
# define a pipeline combining a text feature extractor/transformer with a classifier
pipeline = Pipeline([
('vect', CountVectorizer(decode_error='ignore', analyzer='char', ngram_range=(1,2))),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=0.0001))
])
# features to cross-check
parameters = {
#'vect__max_df': (0.5, 0.75, 1),
#'vect__max_features': (None, 100, 5000),
#'vect__analyzer' : ('char', 'word'),
#'vect__ngram_range': ((1, 1), (1, 2), (2,2), (2,3), (1,3), (1,4), (3,4), (1,5), (4,5), (3,5)),
#'vect__ngram_range': ((1, 1), (1, 2), (1,3)), # unigrams or bigrams or ngrams
#'tfidf__use_idf': (True, False),
#'clf__alpha': (1, 0.5, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001),
#'clf__alpha': (0.001, 0.0001, 0.00001, 0.000001)
}
scores = ['precision', 'recall']
sub_dir = "Results/"
location = "results" + tm.strftime("%Y%m%d-%H%M%S") + ".txt"
with open(os.path.join(sub_dir, location), 'w+') as f:
    for score in scores:
        f.write("%s \n" % score)
        clf = GridSearchCV(pipeline, parameters, cv=2, scoring=score, verbose=0)
        clf.fit(X_train, y_train)
        improvement = (clf.best_score_ - baseline) / baseline
        f.write("Best parameters from a %s stand point:\n" % score)
        f.write("Best score: %0.3f \n" % clf.best_score_)
        f.write("Baseline score: %0.3f \n" % baseline)
        f.write("Improved: %0.3f over baseline \n" % improvement)
        f.write("\n\nGrid scores from a %s stand point:\n" % score)
        # use a distinct name so the outer `scores` list is not shadowed
        for params, mean_score, cv_scores in clf.grid_scores_:
            f.write("%0.3f (+/-%0.03f) for %r \n" % (mean_score, cv_scores.std() / 2, params))
        f.write("\n\n")
        f.write("\n\nDetailed classification report:\n")
        f.write("The model is trained on the full development set.\n")
        f.write("The scores are computed on the full evaluation set.\n")
        y_true, y_pred = y_test, clf.best_estimator_.predict(X_test)
        f.write(classification_report(y_true, y_pred)) |
Uncooled ultrasensitive broad-band solution-processed photodetectors (Conference Presentation) Sensing from the ultraviolet (UV)-visible to infrared (IR) is critical to environmental monitoring and remote sensing, fibre-optic communication, day and night-time surveillance, and emerging medical imaging modalities. Today, separate sensors or materials are required for different sub-bands within the UV to IR wavelength range. In general, AlGaN, Si, InGaAs and PbS based photodetectors (PDs) are used for the four important sub-bands: 0.25 µm-0.4 µm (UV), 0.45 µm-0.8 µm (visible), 0.9 µm-1.7 µm (near IR), 1.5 µm-2.6 µm (middle IR), respectively. To obtain the desired sensitivity, these detectors must be operated at low temperatures (for example, at 4.2 K). Thus, a breakthrough technology would be enabled by a new class of PDs: PDs that do not require cooling to obtain high detectivity, and PDs which are fabricated by solution-processing to enable low-cost multi-color, high quantum efficiency, high sensitivity and high speed response over this broad spectral range. The availability of such PDs for use at room temperature (RT) would offer new and important applications. In this presentation, we would like to share with you how we approach RT operated ultrasensitive broad-band solution-processed PDs. - By developing novel low bandgap semiconducting polymers, we are able to develop RT operated solution-processed polymer PDs with spectral response from 350 nm to 1450 nm, detectivity over 10^13 Jones and a linear dynamic range over 100 dB, and with spectral response from 350 nm to 2500 nm and detectivity over 10^12 Jones; - By using low bandgap semiconducting polymers mixed with high electrical conductivity PbS quantum dots (QDs), inverted polymer hybrid PDs with spectral response from 300 nm to 25000 nm, detectivity over 10^13 Jones and a linear dynamic range over 100 dB are realized; - By using novel perovskite hybrid materials incorporated with carbon nanotubes and novel n-type newly developed semiconducting polymers, we are able to realize RT operated solution-processed PDs with spectral response from 350 nm to 1400 nm, detectivity over 10^12 Jones and a linear dynamic range of approximately 100 dB. |
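For readers unfamiliar with the detectivity figures quoted above, the "Jones" values follow from the standard relation D* = sqrt(A * bandwidth) / NEP. A minimal Python sketch with made-up example numbers (not values from the presented devices) shows the order of magnitude:

import math

def detectivity_jones(area_cm2, bandwidth_hz, nep_w):
    """Specific detectivity D* = sqrt(A * bandwidth) / NEP, in Jones (cm*Hz^0.5/W)."""
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

# Illustrative numbers (assumed): a 1 mm^2 pixel, 1 Hz bandwidth and a
# noise-equivalent power of 1e-14 W give D* = 1e13 Jones, the order of
# magnitude quoted for the polymer photodetectors above.
print("%.1e" % detectivity_jones(0.01, 1.0, 1e-14))   # -> 1.0e+13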
//
// Created by Arseny Tolmachev on 2018/01/19.
//
#ifndef JUMANPP_PARTIAL_EXAMPLE_IO_H
#define JUMANPP_PARTIAL_EXAMPLE_IO_H
#include "core/input/partial_example.h"
#include "util/csv_reader.h"
namespace jumanpp {
namespace core {
namespace input {
class PartialExampleReader {
std::string filename_;
TrainFieldsIndex* tio_;
util::FlatMap<StringPiece, const TrainingExampleField*> fields_;
util::FullyMappedFile file_;
util::CsvReader csv_{'\t', '\0'};
char32_t noBreakToken_ = U'&';
std::vector<chars::InputCodepoint> codepts_;
public:
Status initialize(TrainFieldsIndex* tio, char32_t noBreakToken = U'&');
Status readExample(PartialExample* result, bool* eof);
Status openFile(StringPiece filename);
Status setData(StringPiece data);
char32_t noBreakToken() const { return noBreakToken_; }
};
} // namespace input
} // namespace core
} // namespace jumanpp
#endif // JUMANPP_PARTIAL_EXAMPLE_IO_H
|
def json_reply_error(self, result, message):
    try:
        msg = {}
        msg["result"] = result
        msg["message"] = message
        return "text/css", json.dumps(msg, indent=4)
    except Exception as e:
        # "ex" is assumed to be a stack-trace helper module imported elsewhere in the plugin
        self.logger.error(ex.stack_trace(e))
        msg = {}
        msg["result"] = "fail"
        msg["message"] = "A fatal exception was encountered while processing your request, check the Indigo log for details"
        return "text/css", json.dumps(msg, indent=4) |
import Nav from "./Nav";
import shared from "../styles/shared.module.css";
import colors from "../styles/colors.module.css";
import common from "../styles/common.module.css";
// _____________________________________________________________________________
//
const Component = () => (
<div className={`${common.red} ${colors.green} ${shared.blue}`}>
<h1>Rainbow</h1>
<Nav />
</div>
);
// _____________________________________________________________________________
//
export default Component;
|
Breitling Jet Team tours the U.S.
The Breitling Jet Team, supported by the Swiss watch company Breitling, took in some of the most incredible sights during its first-ever North American Tour this summer.
Performing in the U.S. for the first time this year, the team took the opportunity to see some of the incredible sights our country has to offer, given the bird’s eye views they have from the cockpits of their seven L-39 C Albatros jets.
Flights included a breathtaking pass over the New York City skyline, and a flight to salute the iconic French warship, the Hermione, as well as flights near Mount Rushmore, the Kennedy Space Center in Florida, Yellowstone National Park and Mt. Rainier in Washington State.
The Breitling Jet Team passes over the replica of the 18th-century French ship Hermione, the ship that brought Marquis de Lafayette to the United States in 1780, as the ship sails up the Patuxent River to visit Annapolis, Md. (Photo by Greg L. Davis/Breitling Jet Team)
The Breitling Jet Team Flies Over Statue of Liberty
Flying in Chicago
Kennedy Space Center
Mt. Rainier
Mt. Rushmore
Over Yellowstone
“It was truly special to get a chance to see such beautiful parts of the American landscape,” Breitling Jet Team leader Jacques Bothelin said. “Especially the Mount Rushmore National Memorial.”
To date, the Breitling Jet Team counts performances in over 38 countries with a combined 51,900 flight hours. The team continues across the country for performances at the Canadian International Air Show Sept. 5-7, and the Fort Worth Alliance Air Show Sept. 12- 13. |
Communications between the engine controller of a motor vehicle and off-board devices are becoming more standardised. This is mainly due to the development of OBD II legislation in California, which has been propagated across the US and Europe and is now being taken on by many other countries. The legislation requires the support of certain standard communications protocols and also the provision of certain standard pieces of data by those protocols. This is intended to allow the vehicle service industry access to information from sensors and actuators on the vehicle such that they can make effective and efficient repairs to vehicles. This information can also be accessed by any other monitoring device that might be fitted to the vehicle, and is not restricted to dealer service tools.
However, across the entire fleet of vehicles with differing engine types and configurations, there are relatively few truly “common” pieces of information (e.g. common parameters exist for engine rpm and engine coolant temperature). Therefore, in practice, many of the parameters are available only as “manufacturer specific” items. This includes not just the parameter identifier (PID), but also any scaling information that might be required to decode it. |
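To make the scaling issue concrete, the short Python sketch below decodes two of the truly common Mode 01 parameters from their raw data bytes using the standard SAE J1979 scalings (engine RPM, PID 0x0C, and engine coolant temperature, PID 0x05); the raw byte values are invented for illustration, and manufacturer-specific PIDs would need proprietary scaling tables instead.

def decode_pid(pid, data):
    """Decode a Mode 01 OBD II parameter from its raw data bytes (SAE J1979 scalings)."""
    a = data[0]
    if pid == 0x0C:                      # engine RPM = (256*A + B) / 4
        b = data[1]
        return (256 * a + b) / 4.0
    if pid == 0x05:                      # coolant temperature (deg C) = A - 40
        return a - 40
    raise ValueError("unknown or manufacturer-specific PID: 0x%02X" % pid)

# Example raw responses (illustrative values only)
print(decode_pid(0x0C, [0x1A, 0xF8]))    # -> 1726.0 rpm
print(decode_pid(0x05, [0x5A]))          # -> 50 (deg C)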
Motor unit potential analysis under submaximal contraction: ADEMG study in normal and radiculopathy subjects. The purpose of this study was to investigate motor unit potential (MUP) morphology at different degrees of muscle contraction. Fourteen normal volunteers and 8 patients with C7 radiculopathy were included. The extensor digitorum communis muscle was kept in minimal contraction and at 5%, 10%, 30% and 50% of maximal contraction under the guidance of a strain gauge. Automatic motor unit potential analysis was done by automatic decomposition electromyography (ADEMG) for each epoch of muscle contraction in both the normal and radiculopathic groups. The results revealed that as muscle contraction increased, the observed MUPs had higher amplitude, rising ratio and firing rate, while the duration became shorter, but no regular change was found in the turns reading. These tendencies were found in both the normal and radiculopathic groups. This study enhanced the understanding of the morphology of higher-threshold MUPs, and demonstrated the usefulness of ADEMG in differentiating normal subjects from those with chronic cervical radiculopathy. It is suggested that the examiner should observe both lower and higher threshold MUPs during clinical needle examination to make the diagnosis more accurate. |
// src/main/java/com/turalt/openmentor/restlets/CustomSpringServerServlet.java
package com.turalt.openmentor.restlets;
import org.restlet.Component;
import org.restlet.data.Protocol;
import org.restlet.ext.spring.SpringServerServlet;
public class CustomSpringServerServlet extends SpringServerServlet {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
protected void init(Component component) {
super.init(component);
// Add the file protocol. We can't ever actually do this from the web.xml
// because Spring means we have a real component to initialize from, and
// ServerServlet only uses the client settings when we are using an implicit
// component. See: http://restlet.com/technical-resources/restlet-framework/javadocs/snapshot/jee/ext/org/restlet/ext/servlet/ServerServlet.html
component.getClients().add(Protocol.FILE);
}
}
|
Topics ranging from concussion care to the use of game film technology were discussed at a day-long clinic.
That was the general message to the nearly 100 coaches, teachers, administrators, school nurses, physicians, athletic directors, and athletic trainers who came together at the day-long Management of the Secondary School Athlete symposium, held Wednesday at the NYU Winthrop Research and Academic Center in Mineola.
Guest speakers, which included doctors, trainers, nurses, administrators and coaches from around the state, covered topics from concussion care, to the use of game film in diagnosing injuries, to managing diabetes and allergies in athletes. It was the first edition of what organizers hope will be an annual event.
“After doing this for quite a while and seeing the disconnect between so many different entities that work with student athletes, it gave us the idea to bring all the stakeholders together to make a huge difference in how secondary student athletes are being cared for,” said Stephen Wirth, administrative director of outpatient rehab services and sports medicine at NYU Winthrop Hospital.
On the subject of concussions, former state Senator Kemp Hannon stressed that more research is necessary to adequately care for student athletes.
Garden City High School football coach Dave Ettinger spoke about the use of game film and how it can greatly aid in diagnosing and treating injuries within a game. During Long Island high school football games, plays can be transmitted to an iPad on the sideline using a game film service, Ettinger said.
Ettinger also stressed the importance of having properly-fitting equipment, especially in football.
“Although we won’t be able to get rid of [injuries] completely, having a properly-fitted helmet and having shoulder pads that fit properly can help in keeping athletes safer,” Ettinger said. |
A Registration Scheme for Multispectral Systems Using Phase Correlation and Scale Invariant Feature Matching In the past few years, many multispectral systems which consist of several identical monochrome cameras equipped with different bandpass filters have been developed. However, due to the significant difference in intensity between different band images, image registration becomes very difficult. Considering the common structural characteristic of the multispectral systems, this paper proposes an effective method for registering different band images. First, we use the phase correlation method to calculate the parameters of a coarse-offset relationship between different band images. Then we use the scale invariant feature transform (SIFT) to detect the feature points. For every feature point in a reference image, we can use the coarse-offset parameters to predict the location of its matching point. We only need to compare the feature point in the reference image with the several nearby feature points around the predicted location, instead of the feature points all over the input image. Our experiments show that this method not only avoids false matches and increases correct matches, but also solves the matching problem between an infrared band image and a visible band image in cases lacking man-made objects. |
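The two-stage pipeline described in the abstract can be sketched with OpenCV roughly as follows; this is a minimal illustration under assumed file names (band_ref.png, band_in.png) and an assumed search radius, not the authors' implementation, and the sign convention of the phase-correlation offset may need flipping depending on which image is treated as the reference.

import cv2
import numpy as np

# Load two band images of the same scene (assumed file names).
ref_u8 = cv2.imread("band_ref.png", cv2.IMREAD_GRAYSCALE)
inp_u8 = cv2.imread("band_in.png", cv2.IMREAD_GRAYSCALE)

# Step 1: phase correlation gives a coarse translation between the two bands.
(dx, dy), _response = cv2.phaseCorrelate(ref_u8.astype(np.float32),
                                         inp_u8.astype(np.float32))

# Step 2: SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref_u8, None)
kp_in, des_in = sift.detectAndCompute(inp_u8, None)

# Step 3: for each reference keypoint, predict its location in the input image
# from the coarse offset and only compare descriptors of nearby keypoints.
RADIUS = 20.0  # assumed search radius in pixels
matches = []
for i, kp in enumerate(kp_ref):
    px, py = kp.pt[0] + dx, kp.pt[1] + dy
    best_j, best_dist = None, np.inf
    for j, kp2 in enumerate(kp_in):
        if abs(kp2.pt[0] - px) > RADIUS or abs(kp2.pt[1] - py) > RADIUS:
            continue
        d = np.linalg.norm(des_ref[i] - des_in[j])
        if d < best_dist:
            best_j, best_dist = j, d
    if best_j is not None:
        matches.append((i, best_j))

print("matches restricted by the coarse offset:", len(matches))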
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include "devpal_abi_x64.h"
#define DEVICE_FILE_NAME "/dev/devpal"
void pal_execute_out_16_checked(uint16_t address, uint16_t value, uint64_t * error_out)
{
int file_desc;
int status;
struct out_16_operands out_16_ops = {0};
file_desc = open(DEVICE_FILE_NAME, 0);
if (file_desc < 0) {
printf("[libpal] Can't open device file: %s\n", DEVICE_FILE_NAME);
if(error_out) *error_out = -1;
return;
}
    out_16_ops.in.address = address;
    out_16_ops.in.value = value;
    if (ioctl(file_desc, DEVPAL_EXECUTE_OUT_16, &out_16_ops)) {
        printf("[libpal] out_16 ioctl failed\n");
        if (error_out) *error_out = -1;
        close(file_desc);
        return;
    }
    /* Close the device file once the operation has completed. */
    close(file_desc);
}
void pal_execute_out_16(uint16_t address, uint16_t value)
{ pal_execute_out_16_checked(address, value, NULL); }
|
IN an investment career that spans more than a quarter-century, G. Kenneth Heebner has compiled one of the mutual fund industry's best records. His CGM Capital Development fund -- which has been closed to new investors for 29 years -- the fourth-longest such period on record -- has a 15-year average annual return of 20 percent. That is nearly three percentage points higher than the Standard and Poor's 500-stock index and more than one-third better than the average growth fund.
Mr. Heebner is not afraid to roll the dice -- he makes big bets on individual stocks, producing portfolios that are up to 75 percent more volatile than the overall market. Despite Capital Development's performance, Morningstar Inc., the fund-tracking firm, gives the fund a rating of only three stars because of its risk. On the other hand, Mr. Heebner's newer fund, CGM Realty, a no-load fund that invests in real estate investment trusts, or REIT's, has earned five stars, Morningstar's highest rating, and is unlikely to close anytime soon.
REIT's trade like stocks and own property or, sometimes, mortgages for commercial, industrial, apartment, hotel and retail buildings. They are intended to generate a steady stream of cash, making them attractive to investors who need current income. Also, traders who expect the stock market to slump often turn to real estate trusts.
As of Nov. 30, the most recent data available, the fund's dividend was 3.9 percent, rather low for a realty fund; the one-year return, as of Friday, was 20.4 percent.
Mr. Heebner's approach toward investing in REIT's is the same as investing in stocks: He looks for situations where he thinks earnings in 12 to 18 months will far exceed expectations.
In the Capital Development fund, for example, he bought Ford shares in the early 1980's, during a recession, and captured big profits later. He bought financial services stocks in 1982 and they surged throughout the decade as interest rates declined.
In his real estate fund, Mr. Heebner owns 22 REIT's. Holdings in the office and industrial sector make up 44 percent of assets, while hotels account for 33 percent.
''The most exciting opportunity in the REIT area today, in my judgment,'' he said, ''is the growth in revenues in the office and industrial sector on existing properties.'' The economic and interest-rate environment is favorable, he said, adding that demand for office space is climbing throughout the nation, especially in the Northeast.
In the hotel industry, high occupancy rates, especially in the full-service category, are allowing some owners to achieve rapid growth in revenue per available room, or rev-par. ''We're looking for companies that will benefit from that,'' Mr. Heebner said.
Lawrence Feldman, chairman and chief executive of Tower Realty Trust, has more than 15 years of experience as a developer, Mr. Heebner said. But the trust, with 3.4 million square feet of office space, went public only in October. The share price of $24.69 on Friday is little changed from the offering price of $25, and Mr. Heebner paid an average of $26.80 for his stake -- one of the biggest in his realty mutual fund.
Tower Realty has two attributes Mr. Heebner finds compelling. The first is size. ''I want the base to be small so successful acquisitions can produce meaningful growth,'' he said. The second is that while less than half the REIT's holdings are in Manhattan, they produce half the revenue. This is because rents are higher in New York than in the Southeast and the Southwest, where Tower also operates, and Mr. Heebner expects them to climb.
''I believe Manhattan is beginning an up cycle, and I think you're going to see several years of significant rent increases,'' he said. He also likes the company's strategy of buying class B buildings and renovating them into class A space to bolster return.
Mr. Heebner estimated that Tower's funds from operations, or F.F.O. -- which are roughly comparable to earnings at an industrial company except that depreciation is added back because real estate tends to appreciate -- were $1.87 a share in 1997 and would climb to $2.25 this year and $2.80 in 1999. He expects the share price to rise from less than 12 times 1997 F.F.O. to 14 times next year's figure, or $39, as other investors recognize its strength.
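All of the price targets in this article come from the same arithmetic: a projected price is next year's estimated funds from operations per share times an assumed multiple. A quick sketch using the Tower Realty figures quoted above:

def projected_price(ffo_per_share, ffo_multiple):
    """Price target = estimated F.F.O. per share times the assumed F.F.O. multiple."""
    return ffo_per_share * ffo_multiple

# Tower Realty figures from the article: $2.80 of F.F.O. in 1999 at a 14x multiple.
print(round(projected_price(2.80, 14)))   # -> 39, matching the $39 target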
S. L. Green follows the same strategy as Tower except that its entire portfolio is in Manhattan. And like its rival, it is benefiting from a strengthening local property market as well as from acquisitions on a small base.
Issued at $21, Green's shares were bought by Mr. Heebner at an average cost of $24.30 and traded on Friday at $25.50. Based on his estimates of 1998 F.F.O. of $2.05, rising to $2.55 next year, and an increase in the F.F.O. multiple to 14, from less than 13, Mr. Heebner projects a share price of $36.
Another of CGM Realty's largest stakes is Boykin Lodging, which owns 17 hotels, mostly full-service operations in the Northwest. With relatively few hotel REIT's in favor on Wall Street, Boykin's shares cost the fund an average of $22.60 each, or just nine times last year's estimated F.F.O. of $2.52.
Mr. Heebner expects F.F.O. to grow to $3.40 by 1999, for a multiple of 11 and a projected share price of $37. The shares closed on Friday at $26.56.
Boykin properties include Marriotts and Holiday Inns, in urban areas in Washington, Oregon and California. ''I favor full-service properties, and I favor Western and urban locations,'' Mr. Heebner said.
The company owned only nine properties when it went public in November 1996. Since then, it has added properties and agreed to buy a group of Doubletree hotels, bringing its total properties to 27.
''What has intrigued me is that they have a very small base, and if they're successful with acquisitions they're really going to accelerate growth,'' he said.
Mr. Heebner added that he thought Wall Street analysts underestimated Robert Boykin, the REIT's chairman and chief executive, and the son of its founder, William Boykin. ''They say the son is not that aggressive,'' he said.
Robert Boykin, interviewed separately, noted that he had been president of the company for more than a decade and chief executive since 1992. He acknowledged that the company, whose share price has advanced 21 percent in the last year, has not kept pace with the two best-known hospitality REIT's, Starwood Lodging and Patriot American, which have surged 51 percent and 24 percent, respectively. ''We're probably more conservative,'' he said. |
Theoretical analysis on quantum interference effect in fast-light media We make a systematic theoretical analysis on the quantum interference (QI) effects in various fast-light media (including gain-assisted $N$, gain-assisted ladder-I, and gain-assisted ladder-II atomic systems). We show that such fast-light media are capable of not only completely eliminating the absorption but also suppressing the gain of signal field, and hence provide the possibility to realize a stable propagation of the signal field with a superluminal velocity. We find that there is a destructive (constructive) QI effect in gain-assisted ladder-I (gain-assisted N) system, but no QI in the gain-assisted ladder-II system; furthermore, a crossover from destructive (constructive) QI to Autler-Townes splitting may occur for the gain-assisted ladder-I (gain-assisted N) system when the control field of the system is modulated. Our theoretical analysis can be applied to other multi-level systems, and the results obtained may have promising applications in optical and quantum information processing and transmission. I. INTRODUCTION In the past two decades, much attention has been paid to the study of slow light, which can be realized in various optical media. The most typical system for obtaining slow light is the use of electromagnetically induced transparency (EIT) occurring in a three level -type atomic system interacting with two resonant laser fields. Slow light has many practical applications, including high-capacity communication networks, ultrafast alloptical information processing, precision spectroscopy and precision measurements, quantum computing and quantum information, and so on. However, as pointed out in Ref., EIT-based slow-light scheme has some drawbacks. Two of them are significant signal-field attenuation and spreading and very long response time. Parallel to the study of slow light, in resent years there are also tremendous interest on the investigation of fast light (also called superluminal light). Chu and Wong firstly demonstrated a superluminal propagation of optical wave packet in an absorptive medium. In order to suppress the substantial attenuation of the optical wave packet occurred in the experiment, Chiao proposed to use a gain medium with inverted atomic population to obtain a stable superluminal propagation. Steinberg and Chiao proved that the stable superluminal propagation in the medium with a gain doublet is indeed possible. The works carried out by Wang et al. and Biglow et al., as well as those reported in Refs. [4,, further revealed many intriguing aspects of fast light, including the possibility of obtaining giant Kerr nonlinearity and superluminal optical solitons. Recently, it has been demonstrated that the use of fast-light media can realize quantum phase gates and light and quantum memory. It is well known that the physical mechanism of EIT is the quantum interference (QI) effect contributed by control field, by which the absorption of signal field can be greatly suppressed, i.e. an EIT transparency window is opened in the absorption spectrum of the signal field. Furthermore, the QI also results in a drastic change of dispersion and hence a large reduction of the group velocity of the signal field. In addition, it has been discovered recently that in such systems there exists an interesting crossover from EIT to Autler-Townes splitting (ATS). It is natural to ask the question: Is it possible to have similar phenomena for fast-light media? 
In this article, we give a definite answer to this question by investigating the absorption spectra of several typical fast-light media, including gain-assisted N (GAN), gain-assisted ladder-I (GAL-I), and gain-assisted ladder-II (GAL-II) atomic systems (Fig. 1). We carry out systematic theoretical analyses and give clear physical explanations on the QI effects occurring in these fast-light media by extending the spectrum-decomposition method (SDM) developed recently for the EIT-ATS crossover of slow light. We show that such fast-light media are capable of not only completely eliminating the absorption but also suppressing the gain of signalmodulate field, and hence provide the possibility to realize a stable long-distance propagation of the signal field with a superluminal velocity. We find that there is a destructive (constructive) QI effect in the GAL-I (GAN) system, but no QI in the GAL-II system; furthermore, a crossover from destructive (constructive) QI to ATS may occur for the GAL-I (GAN) system if the control field of the system is modulated. Our theoretical analysis can be applied to other gain-assisted multi-level systems (e.g. quantum dots, rare-earth ions in crystals, etc.), and the results obtained may have promising applications in optical and quantum information processing and transmission. The remainder of the article is organized as follows. In Sec. II, we present the model and analyze the QI effect in the GAN system. In Sec. III and Sec. IV, we carry out similar analyses and provide related results for the GAL-I and GAL-II systems, respectively. Finally, in Sec. V we give a discussion and a summary of the main results obtained in this work. A. Model and linear dispersion relation We first consider a cold atomic system with the GAN-type level configuration ( Fig. 1(a) ). We assume the pump, signal, and control fields propagate in z direction, the electric field vector acting in the system reads where e l (k l ) is the unit polarization vector (wavenumber) of the electric-field component with the envelope E l (l = p, s, c). Under electric-dipole and rotating-wave approximations, the interaction Hamiltonian of the system interacting with laser fields read, where h.c. represents Hermitian conjugation, p = (p 31 E p )/, s = (p 32 E s )/, and c = (p 42 E c )/ are respectively the half Rabi frequencies of the pump, signal, and control fields, with p jl being the electric-dipole matrix element associated with the transition from state |j to state |l. Under electric-dipole approximation (EDA) and rotating-wave approximation (RWA), the dynamics of the system is governed by the Bloch equation i where is a 44 density matrix in the interaction picture, and is a 44 relaxation matrix describing the spontaneous emission and dephasing. The explicit expression of Eq. is presented in Appendix A. As in Ref., we assume the one-photon detuning ∆ 3 is much larger than all the Rabi frequencies, Doppler broadened line width (resulted by the thermal motion of the atoms), atomic coherence decay rates, and frequency shift induced by the pump and control fields. In this situation, the population keeps mainly in the ground state |1 to guarantee the system working in fast-light regime; furthermore, Doppler effect can be largely suppressed. However, the (remanent) Doppler effect still has influence on the QI property of the GAN and GAL-II systems (but not in the GAL-I system), as shown in Sec. V. From the Maxwell equation where 23 = N a s |p 32 | 2 /(2 0 c) with N a the atomic density. 
Note that in deriving the above equation we have assumed the signal-field envelope is wide enough in the transverse (i.e. x, y) directions, so that the diffraction term (∂ 2 /∂x 2 + ∂ 2 /∂y 2 ) s can be disregarded. The base state of the system (i.e. the steady-state solution of the Maxwell-Bloch (MB) Eqs. and for s = 0) is Here the meaning of the quantities jl and d jl has been explained in the Appendix A. For large ∆ 3 one has jl ≈ 0, which means that initially the atomic medium is prepared with the population mainly in the ground state state |1. The base state of this system will evolve into a time-dependent state when the weak signal field is switched on. Solving the MB Eqs. and we obtain the solution where F is an envelope (its concrete form is not needed here), = K()z − t, Explicit expressions of the other first-order solutions for jl (j = 3, l = 2) are omitted here. The linear dispersion relation of the system reads The group velocity of the signal field is given by V g = −1. Fig. 2 near = 0 (blue solid line). Interestingly, the single peak will becomes into two peaks if the condition Shown in is satisfied. Illustrated by the red dashed line in Fig. 2(a) is for c = 0.6. In this case the condition is fulfilled and hence a gain doublet (a dip between two gain peaks) appears in the gain spectrum. By increasing c to 3.0, the distance between the two peaks becomes wide and the gain of the signal field is nearly vanishing at = 0 (black dotted-dashed line). The appearance of the gain doublet in the gain spectrum is due to a QI effect in the system, which will be explained in the next subsection. Fig. 2(b) shows the group velocity Re(V g ) of the signal pulse as a function of / when control field c = 1.0. The system parameters used are the same as in Fig. 2(a). We see that Re(V g ) can be smaller than c (subluminal), larger than c and even negative (superluminal). Especially, at = 0 we have Re(V g ) = −0.51 10 −3 c. Hence the atomic system with GAN level configuration is indeed a fast-light medium. In addition to acquire QI and hence the gain doublet in the gain spectrum, the introduction of the control field can also stabilize the propagation of the signal pulse. Shown in is initial pulse width. In this case (weak control field), the signal pulse has a large gain and hence a significant deformation happens during propagation. However, for a large control field ( c = 3.0), the gain is largely suppressed and hence no deformation occurs during the propagation of the signal pulse. In the numerical simulation, we have taken the initial signal pulse with the form s = 0.01, sech( / 0 ). From these results we conclude that the GAN system with a large control field can support a stable propagation of the signal field. B. Crossover from constructive QI to ATS in the GAN system Now we make a detailed analysis on the appearance of gain doublet in the gain spectrum of the signal field shown above by using the SDM developed recently for slow-light media. To this end, we simplify the linear dispersion relation (i.e. Eq. ). Under the condition One can also obtain a similar expression of K() for nonvanishing ∆ 2 and ∆ 4, but it is lengthy and thus omitted here. K() can be written into the form where Rabi frequency). Shown in We see that although for small c there is a destructive interference between −L + and −L −, the interference is too small and hence can be neglected. So the superposition of −L + and −L − is contributed mainly by −L +. 
As a result, the gain spectrum −Im(K) displays only a single peak (blue solid line). with W = ( 21 + 41 )/2, = 4| c | 2 − ( 21 − 41 ) 2 /2, and g = ( 21 − 41 )/2. The previous two terms in the bracket { } of the above expression (i.e. the two Lorentzian terms) can be thought as the net contribution contributed to the gain resonance from two different Red dashed line is for two Lorentzian terms; black dashed-dotted line is for small constructive QI term; blue solid line is for −Im (K). (d) The "phase diagram" of Im(K) =0 /Im(K) max as a function of | c / ref | illustrating the transition from constructive QI to ATS in the GAN system. Three regions (i.e. constructive QI region, the QI-ATS crossover region, and the ATS region) are separated by two vertical dashed lines. channels corresponding to the two dressed states (i.e. the states |2 and |4 ) created by the control field c. The term proportional to g is clearly a QI term. The QI is controlled by the parameter g and it is destructive (constructive) if g > 0 (g < 0). Since in the GAN system 21 is always much smaller than 41 and g is often negative, thus the QI in the system is a constructive one. for the constructive QI term; the blue solid line is for −Im (K). We see that in this region a gain doublet appears, which is the result of the superposition of the two Lorentzian terms and the QI term. (iii). Large control field region (| c | ≫ ref ): In this case, the gain spectrum is still given by, but the strength of the QI term, g/, is very weak (i.e. g/ ≈ 0). Thus one has Shown in Fig. 3(c) is the gain spectrum as a function of / for | c | = 2.5 ≫ ref. The red dashed line represents the contribution by the sum of the two Lorentzian terms. For illustration, we have also plotted the contribution from the small QI term (omitted in Eq. ), denoted by the black dotted-dashed line. We see that in this case the QI is small and weak constructive. The blue solid line is the curve of −Im (K), which has two resonance peaks at ≈ ± c. Obviously, the phenomenon found in this case belongs to ATS because the window of the gain doublet is mainly due to the contribution by the two Lorentzian terms. From the results given in Fig. 3(b) and Fig. 3(c), we conclude that the GAN system possesses indeed QI effect and allows a crossover from the QI to the ATS. The physical reason for the occurrence of the QI (i.e. the gain doublet in the gain spectrum in Fig. 3(a) ) can be explained as follows. From the Hamiltonian give above we obtain an eigenstate of the system with and c and for small ∆ 2 and ∆ 4, the eigenstate becomes | ≈ |1 − ( p /∆ 3 )|3 + |4, i.e. the atomic state |2 is not involved. Such state is a "dark state" with zero eigenvalue resulted from the quantum interference between the transition paths |3 → |2 and |4 → |2. As a result, the gain is largely suppressed and hence the gain doublet appears in the gain spectrum of the signal field. A. Model and linear dispersion relation We now turn to consider the GAL-I system shown in Fig. 1(b). The differences between the GAL-I system and the GAN system ( Fig. 1(a) ) are that in the GAL-I system the state |4 is below the state |2. It is just these two differences that makes the QI character of the GAL-I system very different from that in the GAN system, as shown below. Under EDA and RWA, in interaction picture the Hamiltonian of the GAL-I system is given with 23 = N a s |p 32 | 2 /(2 0 c). 
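The spectrum-decomposition reasoning above can be reproduced numerically. The short Python sketch below uses a generic two-pole response with illustrative decay rates (not the paper's parameter values), splits the resulting gain profile into its two Lorentzian parts plus a residual interference part, and prints the ratio of the gain at line centre to its maximum, the same quantity used for the "phase diagram", as the control-field Rabi frequency grows.

import numpy as np

# Illustrative decay rates (gamma21 << gamma41), arbitrary units; not the paper's values.
gamma21, gamma41 = 0.01, 1.0

def gain_spectrum(delta, omega_c):
    """Generic two-pole gain profile -Im[(delta + i*gamma41) /
    ((delta + i*gamma21)*(delta + i*gamma41) - omega_c**2)],
    returned together with its Lorentzian part and the interference remainder."""
    b = 1j * (gamma21 + gamma41)
    disc = np.sqrt(b**2 + 4 * (gamma21 * gamma41 + omega_c**2) + 0j)
    dp, dm = (-b + disc) / 2, (-b - disc) / 2          # the two poles
    Ap = (dp + 1j * gamma41) / (dp - dm)               # partial-fraction weights
    Am = -(dm + 1j * gamma41) / (dp - dm)
    total = -np.imag(Ap / (delta - dp) + Am / (delta - dm))
    lorentzian = -(np.real(Ap) * np.imag(dp) / ((delta - np.real(dp))**2 + np.imag(dp)**2)
                   + np.real(Am) * np.imag(dm) / ((delta - np.real(dm))**2 + np.imag(dm)**2))
    return total, lorentzian, total - lorentzian

delta = np.linspace(-5, 5, 4001)
for omega_c in (0.1, 0.5, 2.5):                        # weak, intermediate, strong control field
    total, lor, interf = gain_spectrum(delta, omega_c)
    ratio = total[len(delta) // 2] / total.max()       # gain at line centre / maximum gain
    print("omega_c = %.1f: centre/max = %.2f, peak interference term = %.2f"
          % (omega_c, ratio, np.abs(interf).max()))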
The base state solution of the system can be obtained by using the MB equations The linear dispersion relation of the system reads In Fig. 4(a) we show the gain spectrum of the GAL-I system, i.e. −Im (K), as a function of / and c. States |1 and |3 in the system are assumed to be coupled with a pump field through a two-photon transition (with effective half Rabi frequency p ). When plotting the Increasing c to satisfy the condition | c | 2 > 23 | p | 2 41 /∆ 2 3 (the same as the condition ), a gain doublet appears. The red dashed line in the figure is the result for c = 1.0. When c = 2.5, the gain doublet becomes wide. The occurrence of the gain doublet is due to a QI effect in the system. By inspection of Fig. 2(a) and Fig. 4(a), we find that there are some similar physical characters between them. First, when c is small, Gain spectrum has only a single peak. However, there are some differences between Fig. 2(a) and Fig. 4(a). The most obvious one is that the peak values of −Im(K) in Fig. 2(a) are reduced rapidly when c is increased, but such phenomenon does not occur in Fig. 4(a). One of physical reasons is that in the GAN system ( Fig. 2(a) ) the level |4 is above the level |2 and the large 24 (the spontaneous emission rate from |4 to |2 ) contributes a population to |2 and hence reduces the gain of the signal field significantly. Differently, in the GAL-I system (Fig. 4(a) ) the level |4 is below the level |2 and 42 (the spontaneous emission rate from |2 to |4 ) is small and hence it has no significant influence to the gain spectrum. As a result, the peak values in the gain spectrum of the GAL-I system has no significant change as c is increased. Fig. 4(b) shows the group velocity of the signal pulse as a function of / for c = 1.0. The system parameters used for plotting the figure are the same as those in Fig. 4(a). One sees that Re(V g ) can be smaller and larger than c, and even negative, and hence the GAL-I system is a typical fast-light medium. B. Crossover from destructive QI to ATS in the GAL-I system We now employ the SDM used in Sec. II B to make a detailed analysis on the QI effect in the GAL-I system. For ∆ 3 much larger than ij, ij, p, and c, Eq. is reduced to which can be written as the form Here Rabi frequency. The gain spectrum −Im(K) can be decomposed into three different regions. (i). Weak control field region (| c | < ref ): Eq. can be decomposed into where A ± = ±( ± + i 41 )/( + − − ). Since Re( ± ) = Im(A ± ) = 0, we have with ± = Im( ± ), B ± = A ± ±, and L ± = 23 B ± /( 2 + 2 ± ). Thus the signal-field gain profile comprises two Lorentzian centered at the origin ( = 0). Fig. 5(a) shows the profile of −L +, which is a positive single peak (red dashed line), and −L −, which is a negative single peak (black dash-dotted line). When plotting the figure c = 0.2 is chosen and the other system parameters are the same as in Fig. 4(a). The superposition of −L + and −L − gives the profile of −Im(K) (blue solid line), which has a gain doublet with a gain window opened near = 0. We see that there exists a destructive QI in the GAL-I system, reflecting by the sum of the positive −L + and the negative −L −, very similar to the destructive QI in the ladder-I type EIT system found recently. |4 ). The terms proportional to g are clearly interference ones. The interference is governed by the parameter g and it is destructive (constructive) if g > 0 (g < 0). Because in the GAL-I system 21 is much smaller than 41, g is always positive. 
Thus the QI induced by the control field is destructive. The red dashed line is the contribution by the sum of the two positive Lorentzian terms. The contribution of very small interference terms are also plotted as the black dotted-dashed line, which is negative and thus destructive. The blue solid line is for −Im (K). Obviously, the phenomenon found in this case belongs to ATS because the gain window is wide and mainly due to the contribution of two Lorentzian terms. In Fig. 5(d) we show the "phase diagram" of the system, which reflects the crossover from the destructive QI effect to the ATS in the GAL-I system, by taking Im(K) =0 /Im(K) max as a function of | c / ref |. We see that the phase diagram can also be divided into three regions, i.e. the destructive QI region (weak control field region), the QI-ATS crossover region (intermediate control field region), and the ATS region (large control field region), similar to those found in EIT systems for slow lights. The physical reason for the occurrence of the QI here (i.e. the gain doublet in the gain spectrum in Fig. 4(a) ) can also be explained by the existence of a dark state in the system. The outcome of such quantum interference is appearance of the gain doublet in the gain spectrum of the signal field. Shown in Fig. 6(a) is the signal-field gain spectrum, −Im(K), as a function of / for different c. One sees that for c = 0 the gain spectrum has only a single peak centered at = 0 (blue solid line); Increasing the value of c to 10.0, a gain doublet (dip) opens around = 0 with the single peak moves to the left (red dashed line); Increasing c further to 20.0, the gain doublet still keeps near at = 0 but the single peak moves further to the left (black dotted-dashed line). When drawing the figure, the four atomic states and the system parameters used are the same as those for the GAL-I system (see the last section). The condition for the appearance of the gain doublet in the GAL-II system is | c | 2 > ∆ 3 42. Comparing Fig. 6(a) with Fig. 2(a) and Fig. 4(a), we find that there are obvious differences between them. First, the gain doublet is not opened near at = 0; Second, the gain doublet is very narrow even for a very large control field. The physical reason is that for the large control field the base state of the GAL-II system has very different property from the GAN and GAL-I systems. In addition, there is no QI effect in the GAL-II system, as shown below. Fig. 6(b) shows the group velocity of the signal field Re(V g ) in the GAL-I system as a function of / for c = 1.0. One sees that, as in the GAN and GAL-I systems, the GAL-II system can also have subluminal and superluminal group velocities. We can also make an analysis of the QI character in the GAL-II system by using the SDM. For ∆ 3 larger than ij, ij, p, c, Eq. reduces to the formK() = /c +, which can be written as with is different that for GAL-I system because now + is purely imaginary, which brings some different features for the GAL-II system. First, the two peaks in the signal-field gain spectrum −Im(K) for non-zero c becomes asymmetric (see the red dashed and black dotted-dashed lines in Fig. 6(a) ) because the no real part exists for + but the real part of − is proportional to | c | 2. This fact also explains why the small peak on the right (the large peak on the left) of the gain dip locates at = 0 (at = −| c | 2 /∆ 3 ). 
Second, different from the GAN and GAL-I systems, in the GAL-II system − is always complex, so one cannot set up a parameter ref to divide the system into three control field regions. It is easy to show that the gain spectrum of the GAL-II system can be decomposed into − Im(K) = 24 2 with The first two terms in the bracket of Eq. are two Lorentzian terms. The following terms are proportional to g/. By a simple estimation one obtains g/ ≃ − 41 /(2∆ 3 ), which is negligibly small. So the gain spectrum has only two Lorentzian terms and no interference term, we thus conclude that there is no QI in GAL-II system and hence the system displays only an ATS effect. The GAL-II system has additional properties absent in the GAN and GAL-I systems. For example, it can becomes absorptive for a large control field. This point has already shown in Fig. 6(a), where the gain dip opened in the black dotted-dashed line becomes deep enough so that −Im(K) becomes negative. The physical reason is that for a large c the population is mainly in the state |4, resulting in a significant absorption of the signal field. V. DISCUSSION AND SUMMARY From Sec. II to Sec. IV, we have analyzed the QI characters in the GAN, GAL-I, and GAL-II systems. For comparison, in Table I we have summarized the main results obtained for different systems. If in the table there is "Yes" in the same line for both QI and ATS, an QI-ATS crossover also exists in the system. One may question possible influence resulted from Doppler effect resulted from the thermal motion of the atoms, which is omitted in the above discussions. When the Doppler effect is considered, all calculations for the GAN, GAL-I and GAL-II systems can still be carried out. As an example, in Appendix D we present the result for the GAL-I system. The conclusion is that, in the GAL-I system the QI and the crossover from the QI to ATS are nearly the same as in the case without Doppler effect if an experimental geometry of cancelling Doppler effect is adopted. To show the difference between the three fast-light media with Doppler effect, in Fig. 7 we show the gain spectrum −Im(K) of the GAN, GAL-I and GAL-II systems as functions of for c = 5.0 when Doppler broadening is taken into account. In the figure, the red dashed line (blue solid line) is for the GAL-I system with (without) Doppler effect; the green dashed-dotted line is for GAN system with Doppler effect; the purple solid line is for the GAL-II system with Doppler effect, which has been multiplied by 1/30 for display in the figure. We see that for the GAL-I system there is almost no difference between the situations with and without Doppler effect. However, it can be shown that for the GAN and the GAL-II systems, the QI characters with Doppler effect have some differences in comparison with the case without Doppler effect. Especially, when the Doppler effect is present the width of the gain doublet for the GAN system becomes very narrow and the two peaks become asymmetric and are amplified (green dashed-dotted line). In conclusion, in this article we have made a systematic analysis on the QI effect in several fast-light media by using an extended spectrum-decomposition method. We have shown that such fast-light media are capable of not only completely eliminating the absorption but also suppressing the gain of signal field, and hence provide the possibility to realize a stable long-distance propagation of the signal field with a superluminal velocity. 
We have found that there is a destructive (constructive) QI effect in GAL-I (GAN) system, but no QI in the GAL-II system. We further found that a crossover from destructive (constructive) QI to Autler-Townes splitting may happen for the GAL-I (GAN) system when the control field of the system is manipulated. The fast-light media presented here may have giant Kerr nonlinearities, as demonstrated in Refs. . The fast, all-Optical, zero to continuously controllable phase gates based on such media have been realized experimentally in Ref.. The theoretical method presented here can be applied to other multi-level atoms, or other physical systems (e.g. molecules, quantum dots, nitrogen-valence centers in diamond, and rare-earth ions in crystals, etc.) and the results obtained can help to deepen the understanding of fastlight physics and fast-light spectroscopy and may have promising applications in optical and quantum information processing and transmission, including the enhancement of optical Kerr effect and the realization of light storage and retrieval by means of fast light media. ACKNOWLEDGMENTS with figure (like Fig. 5) can be obtained, which are omitted here. From these results we can acquire the following conclusions: (i) In the weak control field region, −Im(K) is the sum of two Lorentzian terms centered at = 0, which have opposite signs. The superposition of the two Lorentzian terms results in a quantum destructive interference and hence a gain doublet appears in the gain spectrum. (ii) In the strong control field region, −Im(K) can be approximately expressed as a sum of two Lorentzian terms, which however have the same sign, locate at different positions, and far apart each other. Thus in this region the gain spectrum is an ATS one because there is no interference occurring. (iii) In the intermediate control field region, which is the region between the weak and the strong ones, the gain spectrum displays a crossover from the quantum destructive interference to the ATS. In a similar way, one can make similar calculations for the GAN and GAL-II systems. The general conclusions obtained in Sec. III and Sec. IV are not changed when Doppler effect is taken into consideration. However, QI characters in the GAN and GAL-II systems with Doppler effect display differences in comparison with that without Doppler effect, some of which have been described in Fig. 7. |
Ship Energy Efficiency Measures Status and Guidance

The mission of ABS is to serve the public interest as well as the needs of our clients by promoting the security of life and property and preserving the natural environment. It is the policy of ABS to be responsive to the individual and collective needs of our clients as well as those of the public at large, to provide quality services in support of our mission, and to provide our services consistent with international standards developed to avoid, reduce or control pollution to the environment. All of our client commitments, supporting actions, and services delivered must be recognized as expressions of quality. We pledge to monitor our performance as an ongoing activity and to strive for continuous improvement. We commit to operate consistent with applicable environmental legislation and regulations and to provide a framework for establishing and reviewing environmental objectives and targets.

This Advisory has been compiled to provide useful information on the status and the current state of ship energy efficiency measures. It provides guidance to owners and operators on the wide range of options being promoted to improve vessel efficiency, reduce fuel consumption and lower emissions. Included is background information, descriptions of the technologies, explanations of key issues, general pros/cons of each measure and limits of applicability or effectiveness, as well as practical issues related to implementation. The material is presented in five sections: Sections 1 and 3 address challenges for new vessel construction; Sections 2, 4 and 5 cover both new and existing vessels.

Section 1: Hull Form Optimization
This section addresses issues related to the basic hull form design, including selecting proper proportions, reducing resistance by optimizing the hull form and appendage design, and assessing the impact on resistance of waves and wind. There is also a discussion of how the IMO Energy Efficiency Design Index (EEDI) influences ship design and efficiency.

Section 2: Energy-saving Devices
This section covers devices used to correct or improve the efficiency of propellers as well as developing technologies aimed at reducing the hull frictional resistance or using renewable energy sources (such as solar and wind energy).

Section 3: Structural Optimization and Light Weight Construction
This section addresses the impact of the use of high strength steel on lightship weight and energy consumption.

Section 4: Machinery Technology
This section looks at the efficiency gains that are possible in the design and operation of the ship's machinery and systems. |
package runner
import (
"path"
)
func (r *Runner) DeleteModel(id string) (err error) {
dir := path.Join(r.Conf.ModelDir, id)
return DeleteDirectory(dir)
}
func (r *Runner) DeleteTensorboard(id string) (err error) {
dir := path.Join(r.Conf.TensorboardDir, id)
return DeleteDirectory(dir)
}
func (r *Runner) DeleteLog(id string) (err error) {
filePath := path.Join(r.Conf.JobLogDir, id)
return DeleteFile(filePath)
}
func (r *Runner) DeleteEval(id string) (err error) {
filePath := path.Join(r.Conf.EvalDir, id)
return DeleteFile(filePath)
}
func (r *Runner) DeleteInfer(id string) (err error) {
filePath := path.Join(r.Conf.InferDir, id)
return DeleteFile(filePath)
}
func (r *Runner) DeleteShell(id string) (err error) {
filePath := path.Join(r.Conf.JobShellDir, id)
return DeleteFile(filePath)
}
func (r *Runner) DeleteProgressBar(id string) (err error) {
filePath := path.Join(r.Conf.ProgressBarDir, id)
return DeleteFile(filePath)
}
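// Clean removes every artifact associated with the given job id: model, tensorboard
// directory, log, eval, infer, shell script and progress bar. Errors from the individual
// delete calls are ignored so that a missing artifact does not abort the rest of the cleanup.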
func (r *Runner) Clean(id string) {
r.DeleteModel(id)
r.DeleteTensorboard(id)
r.DeleteLog(id)
r.DeleteEval(id)
r.DeleteInfer(id)
r.DeleteShell(id)
r.DeleteProgressBar(id)
}
|
package appwrap
import (
"bytes"
"errors"
"fmt"
"io"
"os"
"os/signal"
"reflect"
"runtime"
"strconv"
"sync"
"time"
cloudms "cloud.google.com/go/redis/apiv1"
xxhash "github.com/cespare/xxhash/v2"
redis "github.com/go-redis/redis/v8"
gax "github.com/googleapis/gax-go/v2"
"github.com/pendo-io/appwrap/internal/metrics"
"go.opencensus.io/tag"
"go.opencensus.io/trace"
"golang.org/x/net/context"
redispb "google.golang.org/genproto/googleapis/cloud/redis/v1"
)
type redisAPIConnectorFn func(ctx context.Context) (redisAPIService, error)
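// NewRedisAPIService returns the real Cloud Redis admin API client; tests can substitute
// a mock by supplying a different redisAPIConnectorFn.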
func NewRedisAPIService(ctx context.Context) (redisAPIService, error) {
return cloudms.NewCloudRedisClient(ctx)
}
// redisAPIService captures the behavior of *cloudms.CloudRedisClient, to make it mockable for testing.
type redisAPIService interface {
io.Closer
FailoverInstance(ctx context.Context, req *redispb.FailoverInstanceRequest, opts ...gax.CallOption) (*cloudms.FailoverInstanceOperation, error)
GetInstance(context.Context, *redispb.GetInstanceRequest, ...gax.CallOption) (*redispb.Instance, error)
}
// Implements needed redis methods for mocking purposes. See *redis.Client for a full list of available methods
// Implementations of these methods convert the returned redis Cmd objects into mockable data by calling
// Err(), Result(), etc.
type redisCommonInterface interface {
Del(ctx context.Context, keys ...string) error
Exists(ctx context.Context, keys ...string) (int64, error)
FlushAll(ctx context.Context) error
FlushAllAsync(ctx context.Context) error
Get(ctx context.Context, key string) ([]byte, error)
IncrBy(ctx context.Context, key string, value int64) (int64, error)
MGet(ctx context.Context, keys ...string) ([]interface{}, error)
Set(ctx context.Context, key string, value interface{}, expiration time.Duration) error
SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) (bool, error)
TxPipeline() redisPipelineInterface
}
// Additionally implements Watch for transactions
type redisClientInterface interface {
redisCommonInterface
PoolStats() *redis.PoolStats
Watch(ctx context.Context, fn func(*redis.Tx) error, keys ...string) error
}
type redisClientImplementation struct {
// common is used for all methods defined on redisCommonInterface
common redis.Cmdable
// client is used for the redisClientInterface-specific methods
client *redis.Client
}
func (rci *redisClientImplementation) Del(ctx context.Context, keys ...string) error {
return rci.common.Del(ctx, keys...).Err()
}
func (rci *redisClientImplementation) Exists(ctx context.Context, keys ...string) (int64, error) {
return rci.common.Exists(ctx, keys...).Result()
}
func (rci *redisClientImplementation) FlushAll(ctx context.Context) error {
return rci.common.FlushAll(ctx).Err()
}
func (rci *redisClientImplementation) FlushAllAsync(ctx context.Context) error {
return rci.common.FlushAllAsync(ctx).Err()
}
func (rci *redisClientImplementation) Get(ctx context.Context, key string) ([]byte, error) {
return rci.common.Get(ctx, key).Bytes()
}
func (rci *redisClientImplementation) IncrBy(ctx context.Context, key string, value int64) (int64, error) {
return rci.common.IncrBy(ctx, key, value).Result()
}
func (rci *redisClientImplementation) MGet(ctx context.Context, keys ...string) ([]interface{}, error) {
return rci.common.MGet(ctx, keys...).Result()
}
func (rci *redisClientImplementation) Set(ctx context.Context, key string, value interface{}, expiration time.Duration) error {
return rci.common.Set(ctx, key, value, expiration).Err()
}
func (rci *redisClientImplementation) SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) (bool, error) {
return rci.common.SetNX(ctx, key, value, expiration).Result()
}
func (rci *redisClientImplementation) TxPipeline() redisPipelineInterface {
return &redisPipelineImplementation{rci.common.TxPipeline()}
}
func (rci *redisClientImplementation) PoolStats() *redis.PoolStats {
return rci.client.PoolStats()
}
// Watch can only be called by the top-level redis Client. In particular, this means that
// *redis.Tx cannot call Watch again - it only implements redis.Cmdable.
func (rci *redisClientImplementation) Watch(ctx context.Context, fn func(*redis.Tx) error, keys ...string) error {
return rci.client.Watch(ctx, fn, keys...)
}
// Implements needed redis pipeline methods for mocking purposes. See redis.Pipeliner for all available methods.
type redisPipelineInterface interface {
Exec(ctx context.Context) ([]redis.Cmder, error)
IncrBy(ctx context.Context, key string, value int64)
Set(ctx context.Context, key string, value interface{}, expiration time.Duration)
SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration)
}
type redisPipelineImplementation struct {
pipeline redis.Pipeliner
}
func (rpi *redisPipelineImplementation) Exec(ctx context.Context) ([]redis.Cmder, error) {
return rpi.pipeline.Exec(ctx)
}
func (rpi *redisPipelineImplementation) IncrBy(ctx context.Context, key string, value int64) {
rpi.pipeline.IncrBy(ctx, key, value)
}
func (rpi *redisPipelineImplementation) Set(ctx context.Context, key string, value interface{}, expiration time.Duration) {
rpi.pipeline.Set(ctx, key, value, expiration)
}
func (rpi *redisPipelineImplementation) SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) {
rpi.pipeline.SetNX(ctx, key, value, expiration)
}
// Allows mocking of IntCmds returned by redis calls
type intCmdInterface interface {
Val() int64
}
type boolCmdInterface interface {
Result() (bool, error)
}
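// Memorystore is a sharded Memcache implementation backed by one Redis client per shard.
// Keys are prefixed with the namespace and distributed across shards by keyHashFn.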
type Memorystore struct {
c context.Context
clients []redisClientInterface
namespace string
keyHashFn func(key string, shardCount int) int
}
type memorystoreService struct {
connectFn redisAPIConnectorFn // if nil, use "real" implementation NewRedisAPIService; non-nil used for testing
mtx sync.Mutex
clients *[]redisClientInterface
addrs []string
addrLastErr error
addrDontRetryUntil time.Time
statReporterOnce sync.Once
}
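// GlobalService is the process-wide memorystoreService used by the package-level
// constructors below; it caches resolved shard addresses and Redis clients across calls.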
var GlobalService memorystoreService
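// InitializeRedisAddrs seeds the shard address list directly, bypassing the Cloud Redis
// API lookup. Supplying a different list than the one currently cached also discards any
// existing clients so they are rebuilt with the new addresses.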
func InitializeRedisAddrs(addrs []string) {
if len(addrs) == 0 {
return
}
GlobalService.mtx.Lock()
defer GlobalService.mtx.Unlock()
if !reflect.DeepEqual(GlobalService.addrs, addrs) {
GlobalService.addrs = addrs
GlobalService.clients = nil
}
}
const redisErrorDontRetryInterval = 5 * time.Second
func (ms *memorystoreService) getRedisAddr(c context.Context, appInfo AppengineInfo, loc CacheLocation, name CacheName, shards CacheShards) (_ []string, finalErr error) {
if ms.addrs != nil && ms.addrLastErr == nil {
return ms.addrs, nil
}
// Handle don't-retry interval: repeat prior error if too soon after failure
now := time.Now()
if ms.addrLastErr != nil && now.Before(ms.addrDontRetryUntil) {
return nil, fmt.Errorf("cached error (no retry for %s): %s", ms.addrDontRetryUntil.Sub(now), ms.addrLastErr)
}
defer func() {
if finalErr != nil {
ms.addrLastErr, ms.addrDontRetryUntil = finalErr, now.Add(redisErrorDontRetryInterval)
}
}()
connectFn := ms.connectFn
if connectFn == nil {
connectFn = NewRedisAPIService
}
client, err := connectFn(context.Background())
if err != nil {
return nil, err
}
defer client.Close()
projectId := appInfo.NativeProjectID()
if ms.addrs == nil {
ms.addrs = make([]string, shards)
}
for shard, existingAddr := range ms.addrs {
if existingAddr != "" {
continue // skip already-successful addresses
}
instance, err := client.GetInstance(c, &redispb.GetInstanceRequest{
Name: fmt.Sprintf("projects/%s/locations/%s/instances/%s-%d", projectId, loc, name, shard),
})
if err != nil {
finalErr = err
continue // skip failed address gets and keep trying to cache others (but consider the overall lookup failed)
}
ms.addrs[shard] = fmt.Sprintf("%s:%d", instance.Host, instance.Port)
}
if finalErr != nil {
return nil, finalErr
}
return ms.addrs, nil
}
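// NewMemorystore returns a Memcache backed by the shared GlobalService, connecting to the
// named Cloud Memorystore (Redis) instances with the given shard count.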
func NewMemorystore(c context.Context, appInfo AppengineInfo, loc CacheLocation, name CacheName, shards CacheShards) (Memcache, error) {
return GlobalService.NewMemorystore(c, appInfo, loc, name, shards)
}
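// NewRateLimitedMemorystore is like NewMemorystore but additionally installs a per-shard
// redis.Limiter, built by createLimiters, to limit command throughput on each shard.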
func NewRateLimitedMemorystore(c context.Context, appInfo AppengineInfo, loc CacheLocation, name CacheName, shards CacheShards, log Logging, createLimiters func(shard int, log Logging) redis.Limiter) (Memcache, error) {
return GlobalService.NewRateLimitedMemorystore(c, appInfo, loc, name, shards, log, createLimiters)
}
func (ms *memorystoreService) NewMemorystore(c context.Context, appInfo AppengineInfo, loc CacheLocation, name CacheName, shards CacheShards) (Memcache, error) {
return ms.NewRateLimitedMemorystore(c, appInfo, loc, name, shards, nil, nil)
}
func (ms *memorystoreService) NewRateLimitedMemorystore(c context.Context, appInfo AppengineInfo, loc CacheLocation, name CacheName, shards CacheShards, log Logging, createLimiter func(shard int, log Logging) redis.Limiter) (Memcache, error) {
// We don't use sync.Once here because we do actually want to execute the long path again in case of failures to initialize.
ourClients := ms.clients
if ourClients == nil {
ms.mtx.Lock()
defer ms.mtx.Unlock()
// Check again, because another goroutine could have beaten us here while we were checking the first time
if ms.clients == nil {
if shards == 0 {
panic("cannot use Memorystore with zero shards")
}
clients := make([]redisClientInterface, shards)
addrs, err := ms.getRedisAddr(c, appInfo, loc, name, shards)
if err != nil {
return nil, err
}
rateLimitersProvided := createLimiter != nil && log != nil
for i := range addrs {
ops := &redis.Options{
Addr: addrs[i],
Password: "",
DB: 0,
// Do not ever use internal retries; let the user of this
// library deal with retrying themselves if they see fit.
MaxRetries: -1,
// These are set by environment variable; see the init() function.
IdleTimeout: memorystoreIdleTimeout,
PoolSize: memorystorePoolSize,
PoolTimeout: memorystorePoolTimeout,
ReadTimeout: memorystoreReadTimeout,
}
if ops.PoolSize == 0 {
ops.PoolSize = 4 * runtime.GOMAXPROCS(0)
}
if rateLimitersProvided {
ops.Limiter = createLimiter(i, log)
}
shard := i
ipaddr := addrs[i]
ops.OnConnect = func(ctx context.Context, cn *redis.Conn) error {
log := NewStackdriverLogging(ctx)
log.Infof("memorystore: created new connection to shard %d (%s)", shard, ipaddr)
return nil
}
client := redis.NewClient(ops)
clients[i] = &redisClientImplementation{client, client}
}
ms.clients = &clients
}
ourClients = ms.clients
}
statInterval := metrics.GetMetricsRecordingInterval()
if statInterval > 0 {
ms.statReporterOnce.Do(func() {
fmt.Fprintf(os.Stderr, "[memorystoreService] stat reporter starting (reporting every %s)\n", statInterval)
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt)
go func() {
ticker := time.NewTicker(statInterval)
defer func() {
ticker.Stop()
}()
for {
select {
case <-ticker.C:
ms.logPoolStats()
case <-sigCh:
fmt.Fprintln(os.Stderr, "[memorystoreService] interrupt received, stopping stat reporter")
return
}
}
}()
})
}
return Memorystore{c, *ourClients, "", defaultKeyHashFn}, nil
}
func (ms *memorystoreService) logPoolStats() {
ms.mtx.Lock()
defer ms.mtx.Unlock()
if ms.clients == nil {
return
}
for i, client := range *ms.clients {
pstats := client.PoolStats()
// These metrics are all for the same connection shard
mctx, err := tag.New(context.Background(), tag.Insert(metrics.KeyConnectionShard, strconv.Itoa(i)))
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to create context with tag: %s\n", err)
continue
}
// Pool usage stats
metrics.RecordWithTagName(mctx, metrics.MMemoryStoreConnectionPoolUsage.M(int64(pstats.Hits)),
metrics.KeyPoolUsageResult, metrics.ConnectionPoolUsageResultHit)
metrics.RecordWithTagName(mctx, metrics.MMemoryStoreConnectionPoolUsage.M(int64(pstats.Misses)),
metrics.KeyPoolUsageResult, metrics.ConnectionPoolUsageResultMiss)
metrics.RecordWithTagName(mctx, metrics.MMemoryStoreConnectionPoolUsage.M(int64(pstats.Timeouts)),
metrics.KeyPoolUsageResult, metrics.ConnectionPoolUsageResultTimeout)
// Connection state stats
metrics.RecordWithTagName(mctx, metrics.MMemoryStoreConnectionPoolConnections.M(int64(pstats.TotalConns-pstats.IdleConns)),
metrics.KeyPoolConnState, metrics.ConnectionPoolConnectionStateActive)
metrics.RecordWithTagName(mctx, metrics.MMemoryStoreConnectionPoolConnections.M(int64(pstats.IdleConns)),
metrics.KeyPoolConnState, metrics.ConnectionPoolConnectionStateIdle)
}
}
func (ms Memorystore) shardedNamespacedKeysForItems(items []*CacheItem) (namespacedKeys [][]string, originalPositions map[string]int, singleShard int) {
keys := make([]string, len(items))
for i, item := range items {
keys[i] = item.Key
}
return ms.shardedNamespacedKeys(keys)
}
func (ms Memorystore) shardedNamespacedKeys(keys []string) (namespacedKeys [][]string, originalPositions map[string]int, singleShard int) {
namespacedKeys = make([][]string, len(ms.clients))
originalPositions = make(map[string]int, len(keys))
singleShard = -1
for i, key := range keys {
namespacedKey, shard := ms.namespacedKeyAndShard(key)
if i > 0 && singleShard != shard {
singleShard = -1
} else {
singleShard = shard
}
namespacedKeys[shard] = append(namespacedKeys[shard], namespacedKey)
originalPositions[namespacedKey] = i
}
return namespacedKeys, originalPositions, singleShard
}
func (ms Memorystore) namespacedKeyAndShard(key string) (string, int) {
if key == "" {
panic("redis: blank key")
}
namespacedKey := ms.namespace + ":" + key
shard := ms.keyHashFn(namespacedKey, len(ms.clients))
return namespacedKey, shard
}
func (ms Memorystore) Add(item *CacheItem) error {
fullKey, shard := ms.namespacedKeyAndShard(item.Key)
c, span := trace.StartSpan(ms.c, traceMemorystoreAdd)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
if added, err := ms.clients[shard].SetNX(c, fullKey, item.Value, item.Expiration); err != nil {
return err
} else if !added {
return CacheErrNotStored
}
return nil
}
func (ms Memorystore) AddMulti(items []*CacheItem) error {
c, span := trace.StartSpan(ms.c, traceMemorystoreAddMulti)
defer span.End()
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(items))))
addMultiForShard := func(shard int, itemIndices map[string]int, shardKeys []string) ([]redis.Cmder, error) {
if len(shardKeys) == 0 {
return nil, nil
}
c, span := trace.StartSpan(c, traceMemorystoreAddMultiShard)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelFirstKey, shardKeys[0]))
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(shardKeys))))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
pipe := ms.clients[shard].TxPipeline()
for _, key := range shardKeys {
item := items[itemIndices[key]]
pipe.SetNX(c, key, item.Value, item.Expiration)
}
return pipe.Exec(c)
}
handleReturn := func(shard int, itemIndices map[string]int, shardKeys []string, shardResults []redis.Cmder, errList []error) bool {
haveErrors := false
for i, result := range shardResults {
if err := result.Err(); err != nil {
errList[itemIndices[shardKeys[i]]] = err
haveErrors = true
} else if added, _ := result.(boolCmdInterface).Result(); !added {
errList[itemIndices[shardKeys[i]]] = CacheErrNotStored
haveErrors = true
}
}
return haveErrors
}
namespacedKeys, itemIndices, singleShard := ms.shardedNamespacedKeysForItems(items)
errList := make(MultiError, len(items))
if singleShard >= 0 {
results, err := addMultiForShard(singleShard, itemIndices, namespacedKeys[singleShard])
if err != nil {
return err
}
if handleReturn(singleShard, itemIndices, namespacedKeys[singleShard], results, errList) {
return errList
}
return nil
}
results := make([][]redis.Cmder, len(ms.clients))
wg := sync.WaitGroup{}
errs := make(chan error, len(ms.clients))
for shard := 0; shard < len(ms.clients); shard++ {
if len(namespacedKeys[shard]) == 0 {
continue
}
shard := shard
wg.Add(1)
go func() {
defer wg.Done()
shardKeys := namespacedKeys[shard]
res, err := addMultiForShard(shard, itemIndices, shardKeys)
if err != nil {
errs <- err
}
results[shard] = res
}()
}
wg.Wait()
select {
case err := <-errs:
return err
default:
}
haveErrors := false
for shard, shardResults := range results {
newErrors := handleReturn(shard, itemIndices, namespacedKeys[shard], shardResults, errList)
haveErrors = haveErrors || newErrors
}
if haveErrors {
return errList
}
return nil
}
func (ms Memorystore) CompareAndSwap(item *CacheItem) error {
fullKey, shard := ms.namespacedKeyAndShard(item.Key)
c, span := trace.StartSpan(ms.c, traceMemorystoreCAS)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelFirstKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
if err := ms.clients[shard].Watch(c, func(tx *redis.Tx) error {
// Watch is an optimistic lock
txClient := &redisClientImplementation{tx, nil}
return ms.doCompareAndSwap(c, item, txClient, fullKey)
}, fullKey); err == redis.TxFailedErr {
return CacheErrCASConflict
} else {
return err
}
}
func (ms Memorystore) doCompareAndSwap(c context.Context, item *CacheItem, tx redisCommonInterface, fullKey string) error {
val, err := tx.Get(c, fullKey)
if err == redis.Nil {
// Does item exist? If not, can't swap it
return CacheErrNotStored
} else if err != nil {
return err
} else if !bytes.Equal(val, item.valueOnLastGet) {
// Did something change before we entered the transaction?
// If so, already too late to swap
return CacheErrCASConflict
}
// Finally, attempt the swap. This will fail if something else beats us there, since we're in a transaction
// This extends the TTL of the item
// The set will succeed even if the item has expired since we entered WATCH
pipe := tx.TxPipeline()
pipe.Set(c, fullKey, item.Value, item.Expiration)
_, err = pipe.Exec(c)
return err
}
// This (and DeleteMulti) doesn't return ErrCacheMiss if the key doesn't exist
// However, no caller of this ever used that error for anything useful
func (ms Memorystore) Delete(key string) error {
return ms.DeleteMulti([]string{key})
}
func (ms Memorystore) DeleteMulti(keys []string) error {
c, span := trace.StartSpan(ms.c, traceMemorystoreDeleteMulti)
defer span.End()
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(keys))))
namespacedKeys, _, _ := ms.shardedNamespacedKeys(keys)
errList := make(MultiError, 0, len(ms.clients))
haveErrors := false
for i, client := range ms.clients {
shardKeys := namespacedKeys[i]
if len(shardKeys) == 0 {
continue
}
func() {
c, span := trace.StartSpan(c, traceMemorystoreDeleteMultiShard)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelFirstKey, shardKeys[0]))
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(shardKeys))))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(i)))
if err := client.Del(c, shardKeys...); err != nil {
errList = append(errList, err)
haveErrors = true
}
}()
}
if haveErrors {
return errList
}
return nil
}
func (ms Memorystore) Flush() error {
return errors.New("please don't call this on memorystore")
/*
Leaving this here to show how you implement flush. It is currently disabled because flush brings down memorystore for the duration of this operation.
errs := make([]error, 0, len(ms.clients))
for _, client := range ms.clients {
if err := client.FlushAll(); err != nil {
errs = append(errs, err)
}
}
if len(errs) == 0 {
return nil
} else {
return MultiError(errs)
}
*/
}
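// FlushShard asynchronously flushes all keys on the given shard (FLUSHALL ASYNC); unlike
// Flush, it only affects a single shard at a time.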
func (ms Memorystore) FlushShard(shard int) error {
if shard < 0 || shard >= len(ms.clients) {
return fmt.Errorf("shard must be in range [0, %d), got %d", len(ms.clients), shard)
}
return ms.clients[shard].FlushAllAsync(ms.c)
}
func (ms Memorystore) Get(key string) (*CacheItem, error) {
c, span := trace.StartSpan(ms.c, traceMemorystoreGet)
defer span.End()
fullKey, shard := ms.namespacedKeyAndShard(key)
span.AddAttributes(trace.StringAttribute(traceLabelKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
if val, err := ms.clients[shard].Get(c, fullKey); err == redis.Nil {
return nil, ErrCacheMiss
} else if err != nil {
return nil, err
} else {
valCopy := make([]byte, len(val))
copy(valCopy, val)
return &CacheItem{
Key: key,
Value: val,
valueOnLastGet: valCopy,
}, nil
}
}
func (ms Memorystore) GetMulti(keys []string) (map[string]*CacheItem, error) {
c, span := trace.StartSpan(ms.c, traceMemorystoreGetMulti)
defer span.End()
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(keys))))
getMultiForShard := func(shard int, itemIndices map[string]int, shardKeys []string) ([]interface{}, error) {
if len(shardKeys) == 0 {
return nil, nil
}
c, span := trace.StartSpan(c, traceMemorystoreGetMultiShard)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelFirstKey, shardKeys[0]))
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(shardKeys))))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
return ms.clients[shard].MGet(c, shardKeys...)
}
handleReturn := func(shard int, itemIndices map[string]int, shardKeys []string, shardVals []interface{}, results map[string]*CacheItem) {
for i, val := range shardVals {
if val == nil {
// Not found
continue
}
valBytes := ms.convertToByteSlice(val)
valCopy := make([]byte, len(valBytes))
copy(valCopy, valBytes)
key := keys[itemIndices[shardKeys[i]]]
results[key] = &CacheItem{
Key: key,
Value: valBytes,
valueOnLastGet: valCopy,
}
}
}
namespacedKeys, keyIndices, singleShard := ms.shardedNamespacedKeys(keys)
// Fast path (no goroutine) if only one shard is involved
if singleShard >= 0 {
results := make(map[string]*CacheItem, len(keys))
vals, err := getMultiForShard(singleShard, keyIndices, namespacedKeys[singleShard])
if err != nil {
return nil, err
}
handleReturn(singleShard, keyIndices, namespacedKeys[singleShard], vals, results)
return results, nil
}
returnVals := make([][]interface{}, len(ms.clients))
wg := sync.WaitGroup{}
haveErrors := false
finalErr := make(MultiError, len(ms.clients))
for shard := 0; shard < len(ms.clients); shard++ {
shardKeys := namespacedKeys[shard]
if len(shardKeys) == 0 {
continue
}
wg.Add(1)
shard := shard
go func() {
defer wg.Done()
vals, err := getMultiForShard(shard, keyIndices, shardKeys)
returnVals[shard] = vals
if err != nil {
finalErr[shard] = err
haveErrors = true
}
}()
}
wg.Wait()
results := make(map[string]*CacheItem, len(keys))
for shard, shardVals := range returnVals {
handleReturn(shard, keyIndices, namespacedKeys[shard], shardVals, results)
}
if haveErrors {
return results, finalErr
} else {
return results, nil
}
}
func (ms Memorystore) convertToByteSlice(v interface{}) []byte {
switch v.(type) {
case string:
return []byte(v.(string))
case []byte:
return v.([]byte)
default:
panic(fmt.Sprintf("unsupported type for convert: %T, %+v", v, v))
}
}
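// Increment initializes the key to initialValue if it does not already exist and then
// increments it by amount, returning the new value. Both steps are executed in a single
// transactional pipeline on the key's shard.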
func (ms Memorystore) Increment(key string, amount int64, initialValue uint64) (incr uint64, err error) {
fullKey, shard := ms.namespacedKeyAndShard(key)
c, span := trace.StartSpan(ms.c, traceMemorystoreIncr)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
pipe := ms.clients[shard].TxPipeline()
pipe.SetNX(c, fullKey, initialValue, time.Duration(0))
pipe.IncrBy(c, fullKey, amount)
var res []redis.Cmder
if res, err = pipe.Exec(c); err == nil {
incr = uint64(res[1].(intCmdInterface).Val())
}
return incr, err
}
func (ms Memorystore) IncrementExisting(key string, amount int64) (uint64, error) {
fullKey, shard := ms.namespacedKeyAndShard(key)
c, span := trace.StartSpan(ms.c, traceMemorystoreIncrExisting)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
if res, err := ms.clients[shard].Exists(c, fullKey); err == nil && res == 1 {
val, err := ms.clients[shard].IncrBy(c, fullKey, amount)
return uint64(val), err
} else if err != nil {
return 0, err
} else {
return 0, ErrCacheMiss
}
}
func (ms Memorystore) Set(item *CacheItem) error {
fullKey, shard := ms.namespacedKeyAndShard(item.Key)
c, span := trace.StartSpan(ms.c, traceMemorystoreSet)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelKey, fullKey))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
return ms.clients[shard].Set(c, fullKey, item.Value, item.Expiration)
}
func (ms Memorystore) SetMulti(items []*CacheItem) error {
c, span := trace.StartSpan(ms.c, traceMemorystoreSetMulti)
defer span.End()
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(items))))
setMultiForShard := func(shard int, itemIndices map[string]int, shardKeys []string) error {
if len(shardKeys) == 0 {
return nil
}
c, span := trace.StartSpan(c, traceMemorystoreSetMultiShard)
defer span.End()
span.AddAttributes(trace.StringAttribute(traceLabelFirstKey, shardKeys[0]))
span.AddAttributes(trace.Int64Attribute(traceLabelNumKeys, int64(len(shardKeys))))
span.AddAttributes(trace.Int64Attribute(traceLabelShard, int64(shard)))
pipe := ms.clients[shard].TxPipeline()
for i, key := range shardKeys {
item := items[itemIndices[shardKeys[i]]]
pipe.Set(c, key, item.Value, item.Expiration)
}
_, err := pipe.Exec(c)
if err != nil {
return err
}
return nil
}
namespacedKeys, itemIndices, singleShard := ms.shardedNamespacedKeysForItems(items)
if singleShard >= 0 {
if err := setMultiForShard(singleShard, itemIndices, namespacedKeys[singleShard]); err != nil {
return err
}
return nil
}
errs := make(chan error, len(ms.clients))
wg := sync.WaitGroup{}
for shard := 0; shard < len(ms.clients); shard++ {
if len(namespacedKeys[shard]) == 0 {
continue
}
shard := shard
wg.Add(1)
go func() {
defer wg.Done()
shardKeys := namespacedKeys[shard]
if err := setMultiForShard(shard, itemIndices, shardKeys); err != nil {
errs <- err
}
}()
}
wg.Wait()
select {
case err := <-errs:
return err
default:
return nil
}
}
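// Namespace returns a copy of this Memorystore whose keys are prefixed with the given
// namespace; the underlying clients and key-hash function are shared.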
func (ms Memorystore) Namespace(ns string) Memcache {
return Memorystore{ms.c, ms.clients, ns, ms.keyHashFn}
}
func defaultKeyHashFn(key string, shardCount int) int {
return int(xxhash.Sum64String(key) % uint64(shardCount))
}
const (
envMemorystoreIdleTimeoutMs = "memorystore_idle_timeout_ms"
envMemorystorePoolSize = "memorystore_pool_size"
envMemorystorePoolTimeoutMs = "memorystore_pool_timeout_ms"
envMemorystoreReadTimeoutMs = "memorystore_read_timeout_ms"
)
var (
memorystoreIdleTimeout time.Duration
memorystorePoolSize int
memorystorePoolTimeout time.Duration
memorystoreReadTimeout time.Duration
)
func init() {
readTimeoutMsStr := os.Getenv(envMemorystoreReadTimeoutMs)
if readTimeoutMsStr != "" {
timeoutMs, err := strconv.ParseInt(readTimeoutMsStr, 10, 64)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse '%s' value: '%s': %s\n",
envMemorystoreReadTimeoutMs, readTimeoutMsStr, err)
} else if timeoutMs < 1 {
fmt.Fprintf(os.Stderr, "'%s' must be a non-zero non-negative integer\n",
envMemorystoreReadTimeoutMs)
} else {
memorystoreReadTimeout = time.Duration(timeoutMs) * time.Millisecond
}
}
memorystorePoolSizeStr := os.Getenv(envMemorystorePoolSize)
if memorystorePoolSizeStr != "" {
poolSize, err := strconv.ParseInt(memorystorePoolSizeStr, 10, 64)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse '%s' value: '%s': %s\n",
envMemorystorePoolSize, memorystorePoolSizeStr, err)
} else if poolSize < 1 {
fmt.Fprintf(os.Stderr, "'%s' must be a non-zero non-negative integer\n",
envMemorystorePoolSize)
} else {
memorystorePoolSize = int(poolSize)
}
}
poolTimeoutStr := os.Getenv(envMemorystorePoolTimeoutMs)
if poolTimeoutStr != "" {
timeoutMs, err := strconv.ParseInt(poolTimeoutStr, 10, 64)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse '%s' value: '%s': %s\n",
envMemorystorePoolTimeoutMs, poolTimeoutStr, err)
} else if timeoutMs < 1 {
fmt.Fprintf(os.Stderr, "'%s' must be a non-zero non-negative integer\n",
envMemorystorePoolTimeoutMs)
} else {
memorystorePoolTimeout = time.Duration(timeoutMs) * time.Millisecond
}
}
// From https://cloud.google.com/memorystore/docs/redis/redis-configs:
// The default idle timeout on the managed Redis servers used by Memorystore
// is 0, which is to say the connections are _never_ disconnected by the server.
// The go-redis documentation says that any client-specified value should always
// be less than the Redis server's value, or disabled.
//
// To disable idle connection reaping, specify -1.
idleTimeoutStr := os.Getenv(envMemorystoreIdleTimeoutMs)
if idleTimeoutStr != "" {
timeoutMs, err := strconv.ParseInt(idleTimeoutStr, 10, 64)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to parse '%s' value: '%s': %s\n",
envMemorystoreIdleTimeoutMs, idleTimeoutStr, err)
} else if timeoutMs == -1 {
memorystoreIdleTimeout = -1
} else if timeoutMs < 1 {
fmt.Fprintf(os.Stderr, "'%s' must be either a non-zero non-negative integer, or -1 to disable idle timeout\n",
envMemorystoreIdleTimeoutMs)
} else {
memorystoreIdleTimeout = time.Duration(timeoutMs) * time.Millisecond
}
}
}
|
// Copyright © 2019 The Knative Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package serving
import (
"context"
"fmt"
"strconv"
"github.com/knative/serving/pkg/apis/autoscaling"
servingv1alpha1 "github.com/knative/serving/pkg/apis/serving/v1alpha1"
servingv1beta1 "github.com/knative/serving/pkg/apis/serving/v1beta1"
corev1 "k8s.io/api/core/v1"
)
// UpdateEnvVars gives the configuration all the env var values listed in the given map of
// vars. Does not touch any environment variables not mentioned, but it can add
// new env vars and change the values of existing ones.
func UpdateEnvVars(template *servingv1alpha1.RevisionTemplateSpec, vars map[string]string) error {
container, err := ContainerOfRevisionTemplate(template)
if err != nil {
return err
}
container.Env = updateEnvVarsFromMap(container.Env, vars)
return nil
}
// UpdateMinScale updates min scale annotation
func UpdateMinScale(template *servingv1alpha1.RevisionTemplateSpec, min int) error {
return UpdateAnnotation(template, autoscaling.MinScaleAnnotationKey, strconv.Itoa(min))
}
// UpdateMaxScale updates the max scale annotation
func UpdateMaxScale(template *servingv1alpha1.RevisionTemplateSpec, max int) error {
return UpdateAnnotation(template, autoscaling.MaxScaleAnnotationKey, strconv.Itoa(max))
}
// UpdateConcurrencyTarget updates container concurrency annotation
func UpdateConcurrencyTarget(template *servingv1alpha1.RevisionTemplateSpec, target int) error {
// TODO(toVersus): Remove the following validation once serving library is updated to v0.8.0
// and just rely on ValidateAnnotations method.
if target < autoscaling.TargetMin {
return fmt.Errorf("Invalid 'concurrency-target' value: must be an integer greater than 0: %s",
autoscaling.TargetAnnotationKey)
}
return UpdateAnnotation(template, autoscaling.TargetAnnotationKey, strconv.Itoa(target))
}
// UpdateConcurrencyLimit updates container concurrency limit
func UpdateConcurrencyLimit(template *servingv1alpha1.RevisionTemplateSpec, limit int) error {
cc := servingv1beta1.RevisionContainerConcurrencyType(limit)
// Validate input limit
ctx := context.Background()
if err := cc.Validate(ctx).ViaField("spec.containerConcurrency"); err != nil {
return fmt.Errorf("Invalid 'concurrency-limit' value: %s", err)
}
template.Spec.ContainerConcurrency = cc
return nil
}
// UpdateAnnotation updates (or adds) an annotation on the given revision template
func UpdateAnnotation(template *servingv1alpha1.RevisionTemplateSpec, annotation string, value string) error {
annoMap := template.Annotations
if annoMap == nil {
annoMap = make(map[string]string)
template.Annotations = annoMap
}
// Validate autoscaling annotations and return an error if invalid input is provided,
// without changing the existing spec.
in := make(map[string]string)
in[annotation] = value
if err := autoscaling.ValidateAnnotations(in); err != nil {
return err
}
annoMap[annotation] = value
return nil
}
// EnvToMap is a utility function to translate between the API list form of env vars, and the
// more convenient map form.
func EnvToMap(vars []corev1.EnvVar) (map[string]string, error) {
result := map[string]string{}
for _, envVar := range vars {
_, present := result[envVar.Name]
if present {
return nil, fmt.Errorf("env var name present more than once: %v", envVar.Name)
}
result[envVar.Name] = envVar.Value
}
return result, nil
}
// UpdateImage updates the image of the revision template's container
func UpdateImage(template *servingv1alpha1.RevisionTemplateSpec, image string) error {
container, err := ContainerOfRevisionTemplate(template)
if err != nil {
return err
}
container.Image = image
return nil
}
// UpdateContainerPort updates the container with a given port
func UpdateContainerPort(template *servingv1alpha1.RevisionTemplateSpec, port int32) error {
container, err := ContainerOfRevisionTemplate(template)
if err != nil {
return err
}
container.Ports = []corev1.ContainerPort{{
ContainerPort: port,
}}
return nil
}
// UpdateResources updates resources as requested
func UpdateResources(template *servingv1alpha1.RevisionTemplateSpec, requestsResourceList corev1.ResourceList, limitsResourceList corev1.ResourceList) error {
container, err := ContainerOfRevisionTemplate(template)
if err != nil {
return err
}
if container.Resources.Requests == nil {
container.Resources.Requests = corev1.ResourceList{}
}
for k, v := range requestsResourceList {
container.Resources.Requests[k] = v
}
if container.Resources.Limits == nil {
container.Resources.Limits = corev1.ResourceList{}
}
for k, v := range limitsResourceList {
container.Resources.Limits[k] = v
}
return nil
}
// =======================================================================================
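// updateEnvVarsFromMap merges vars into the existing env var list: entries that already
// exist are updated in place, and any names not yet present are appended.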
func updateEnvVarsFromMap(env []corev1.EnvVar, vars map[string]string) []corev1.EnvVar {
set := make(map[string]bool)
for i := range env {
envVar := &env[i]
value, present := vars[envVar.Name]
if present {
envVar.Value = value
set[envVar.Name] = true
}
}
for name, value := range vars {
if !set[name] {
env = append(
env,
corev1.EnvVar{
Name: name,
Value: value,
})
}
}
return env
}
|
<filename>src/pyenbc/filehelper/__init__.py
"""
@file
@brief Shortcuts to filehelper
"""
from .jython_helper import run_jython, get_jython_jar, is_java_installed, download_java_standalone
|
<filename>MyPkg/Application/ACPI_Get_DSDT/ACPI_Get_DSDT.h
#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/BaseLib.h>
#include <Library/BaseMemoryLib.h>
#include <Library/PrintLib.h>
#include <Library/MemoryAllocationLib.h>
#include <Library/UefiApplicationEntryPoint.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/LoadedImage.h>
#include <Protocol/EfiShellParameters.h>
#include <Guid/Acpi.h>
#include <COMPAL_AcpiLib.h>
|
// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
use std::fmt::Write;
/// See [`AssignInstanceInput`](crate::input::AssignInstanceInput)
pub mod assign_instance_input {
/// A builder for [`AssignInstanceInput`](crate::input::AssignInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>The layer ID, which must correspond to a custom layer. You cannot assign a registered instance to a built-in layer.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>The layer ID, which must correspond to a custom layer. You cannot assign a registered instance to a built-in layer.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// Consumes the builder and constructs a [`AssignInstanceInput`](crate::input::AssignInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::AssignInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::AssignInstanceInput {
instance_id: self.instance_id,
layer_ids: self.layer_ids,
})
}
}
}
#[doc(hidden)]
pub type AssignInstanceInputOperationOutputAlias = crate::operation::AssignInstance;
#[doc(hidden)]
pub type AssignInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl AssignInstanceInput {
/// Consumes the builder and constructs an Operation<[`AssignInstance`](crate::operation::AssignInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::AssignInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::AssignInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::AssignInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::AssignInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.AssignInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_assign_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::AssignInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"AssignInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`AssignInstanceInput`](crate::input::AssignInstanceInput)
pub fn builder() -> crate::input::assign_instance_input::Builder {
crate::input::assign_instance_input::Builder::default()
}
}
/// See [`AssignVolumeInput`](crate::input::AssignVolumeInput)
pub mod assign_volume_input {
/// A builder for [`AssignVolumeInput`](crate::input::AssignVolumeInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) volume_id: std::option::Option<std::string::String>,
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The volume ID.</p>
pub fn volume_id(mut self, input: impl Into<std::string::String>) -> Self {
self.volume_id = Some(input.into());
self
}
/// <p>The volume ID.</p>
pub fn set_volume_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.volume_id = input;
self
}
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs a [`AssignVolumeInput`](crate::input::AssignVolumeInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::AssignVolumeInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::AssignVolumeInput {
volume_id: self.volume_id,
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type AssignVolumeInputOperationOutputAlias = crate::operation::AssignVolume;
#[doc(hidden)]
pub type AssignVolumeInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl AssignVolumeInput {
/// Consumes the builder and constructs an Operation<[`AssignVolume`](crate::operation::AssignVolume)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::AssignVolume,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::AssignVolumeInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::AssignVolumeInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::AssignVolumeInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.AssignVolume",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_assign_volume(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::AssignVolume::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"AssignVolume",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`AssignVolumeInput`](crate::input::AssignVolumeInput)
pub fn builder() -> crate::input::assign_volume_input::Builder {
crate::input::assign_volume_input::Builder::default()
}
}
/// See [`AssociateElasticIpInput`](crate::input::AssociateElasticIpInput)
pub mod associate_elastic_ip_input {
/// A builder for [`AssociateElasticIpInput`](crate::input::AssociateElasticIpInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_ip: std::option::Option<std::string::String>,
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_ip = Some(input.into());
self
}
/// <p>The Elastic IP address.</p>
pub fn set_elastic_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.elastic_ip = input;
self
}
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs a [`AssociateElasticIpInput`](crate::input::AssociateElasticIpInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::AssociateElasticIpInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::AssociateElasticIpInput {
elastic_ip: self.elastic_ip,
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type AssociateElasticIpInputOperationOutputAlias = crate::operation::AssociateElasticIp;
#[doc(hidden)]
pub type AssociateElasticIpInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl AssociateElasticIpInput {
/// Consumes the builder and constructs an Operation<[`AssociateElasticIp`](crate::operation::AssociateElasticIp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::AssociateElasticIp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::AssociateElasticIpInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::AssociateElasticIpInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::AssociateElasticIpInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.AssociateElasticIp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_associate_elastic_ip(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::AssociateElasticIp::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"AssociateElasticIp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`AssociateElasticIpInput`](crate::input::AssociateElasticIpInput)
pub fn builder() -> crate::input::associate_elastic_ip_input::Builder {
crate::input::associate_elastic_ip_input::Builder::default()
}
}
/// See [`AttachElasticLoadBalancerInput`](crate::input::AttachElasticLoadBalancerInput)
pub mod attach_elastic_load_balancer_input {
/// A builder for [`AttachElasticLoadBalancerInput`](crate::input::AttachElasticLoadBalancerInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_load_balancer_name: std::option::Option<std::string::String>,
pub(crate) layer_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn elastic_load_balancer_name(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_load_balancer_name = Some(input.into());
self
}
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn set_elastic_load_balancer_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.elastic_load_balancer_name = input;
self
}
/// <p>The ID of the layer to which the Elastic Load Balancing instance is to be attached.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The ID of the layer to which the Elastic Load Balancing instance is to be attached.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// Consumes the builder and constructs a [`AttachElasticLoadBalancerInput`](crate::input::AttachElasticLoadBalancerInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::AttachElasticLoadBalancerInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::AttachElasticLoadBalancerInput {
elastic_load_balancer_name: self.elastic_load_balancer_name,
layer_id: self.layer_id,
})
}
}
}
#[doc(hidden)]
pub type AttachElasticLoadBalancerInputOperationOutputAlias =
crate::operation::AttachElasticLoadBalancer;
#[doc(hidden)]
pub type AttachElasticLoadBalancerInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl AttachElasticLoadBalancerInput {
/// Consumes the builder and constructs an Operation<[`AttachElasticLoadBalancer`](crate::operation::AttachElasticLoadBalancer)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::AttachElasticLoadBalancer,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::AttachElasticLoadBalancerInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::AttachElasticLoadBalancerInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::AttachElasticLoadBalancerInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.AttachElasticLoadBalancer",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_attach_elastic_load_balancer(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::AttachElasticLoadBalancer::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"AttachElasticLoadBalancer",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`AttachElasticLoadBalancerInput`](crate::input::AttachElasticLoadBalancerInput)
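    ///
    /// A minimal builder sketch (the load balancer name and layer ID are placeholders, and the
    /// external crate name `aws_sdk_opsworks` is an assumption):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::AttachElasticLoadBalancerInput::builder()
    ///     // Both setters accept anything convertible into a String.
    ///     .elastic_load_balancer_name("my-load-balancer")
    ///     .layer_id("my-layer-id")
    ///     .build()
    ///     .expect("build() only moves the collected fields into the input struct");
    /// # let _ = input;
    /// ```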
pub fn builder() -> crate::input::attach_elastic_load_balancer_input::Builder {
crate::input::attach_elastic_load_balancer_input::Builder::default()
}
}
/// See [`CloneStackInput`](crate::input::CloneStackInput)
pub mod clone_stack_input {
/// A builder for [`CloneStackInput`](crate::input::CloneStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) source_stack_id: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) region: std::option::Option<std::string::String>,
pub(crate) vpc_id: std::option::Option<std::string::String>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
pub(crate) service_role_arn: std::option::Option<std::string::String>,
pub(crate) default_instance_profile_arn: std::option::Option<std::string::String>,
pub(crate) default_os: std::option::Option<std::string::String>,
pub(crate) hostname_theme: std::option::Option<std::string::String>,
pub(crate) default_availability_zone: std::option::Option<std::string::String>,
pub(crate) default_subnet_id: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
pub(crate) configuration_manager:
std::option::Option<crate::model::StackConfigurationManager>,
pub(crate) chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
pub(crate) use_custom_cookbooks: std::option::Option<bool>,
pub(crate) use_opsworks_security_groups: std::option::Option<bool>,
pub(crate) custom_cookbooks_source: std::option::Option<crate::model::Source>,
pub(crate) default_ssh_key_name: std::option::Option<std::string::String>,
pub(crate) clone_permissions: std::option::Option<bool>,
pub(crate) clone_app_ids: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
pub(crate) agent_version: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The source stack ID.</p>
pub fn source_stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.source_stack_id = Some(input.into());
self
}
/// <p>The source stack ID.</p>
pub fn set_source_stack_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.source_stack_id = input;
self
}
/// <p>The cloned stack name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The cloned stack name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>The cloned stack AWS region, such as "ap-northeast-2". For more information about AWS regions, see
/// <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn region(mut self, input: impl Into<std::string::String>) -> Self {
self.region = Some(input.into());
self
}
/// <p>The cloned stack AWS region, such as "ap-northeast-2". For more information about AWS regions, see
/// <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn set_region(mut self, input: std::option::Option<std::string::String>) -> Self {
self.region = input;
self
}
/// <p>The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All
/// instances are launched into this VPC, and you cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2 Classic, the default value is no VPC.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2 Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively. </p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2 Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub fn vpc_id(mut self, input: impl Into<std::string::String>) -> Self {
self.vpc_id = Some(input.into());
self
}
/// <p>The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All
/// instances are launched into this VPC, and you cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2 Classic, the default value is no VPC.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2 Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively. </p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2 Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub fn set_vpc_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.vpc_id = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>A list of stack attributes and values as key/value pairs to be added to the cloned stack.</p>
pub fn attributes(
mut self,
k: impl Into<crate::model::StackAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>A list of stack attributes and values as key/value pairs to be added to the cloned stack.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// <p>The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. If you create a stack by using the AWS OpsWorks Stacks console, it creates the role for
/// you. You can obtain an existing stack's IAM ARN programmatically by calling
/// <a>DescribePermissions</a>. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
/// <note>
/// <p>You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly.</p>
/// </note>
pub fn service_role_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.service_role_arn = Some(input.into());
self
}
/// <p>The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. If you create a stack by using the AWS OpsWorks Stacks console, it creates the role for
/// you. You can obtain an existing stack's IAM ARN programmatically by calling
/// <a>DescribePermissions</a>. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
/// <note>
/// <p>You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly.</p>
/// </note>
pub fn set_service_role_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.service_role_arn = input;
self
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.default_instance_profile_arn = Some(input.into());
self
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_default_instance_profile_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_instance_profile_arn = input;
self
}
/// <p>The stack's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux
/// 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the parent stack's operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the cloned stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn default_os(mut self, input: impl Into<std::string::String>) -> Self {
self.default_os = Some(input.into());
self
}
/// <p>The stack's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux
/// 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the parent stack's operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the cloned stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn set_default_os(mut self, input: std::option::Option<std::string::String>) -> Self {
self.default_os = input;
self
}
        /// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname_theme = Some(input.into());
self
}
        /// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn set_hostname_theme(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.hostname_theme = input;
self
}
/// <p>The cloned stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn default_availability_zone(mut self, input: impl Into<std::string::String>) -> Self {
self.default_availability_zone = Some(input.into());
self
}
/// <p>The cloned stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn set_default_availability_zone(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_availability_zone = input;
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(mut self, input: impl Into<std::string::String>) -> Self {
self.default_subnet_id = Some(input.into());
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn set_default_subnet_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_subnet_id = input;
self
}
/// <p>A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>
/// </p>
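        ///
        /// A small sketch of how the escaped-JSON string can be written in Rust (placeholder keys;
        /// a raw string literal avoids manual escaping):
        ///
        /// ```no_run
        /// // Equivalent to the documented form "{\"key1\": \"value1\", \"key2\": \"value2\"}".
        /// let custom_json = r#"{"key1": "value1", "key2": "value2"}"#;
        /// # let _ = custom_json;
        /// ```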
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>
/// </p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
        /// <p>The configuration manager. When you clone a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
mut self,
input: crate::model::StackConfigurationManager,
) -> Self {
self.configuration_manager = Some(input);
self
}
        /// <p>The configuration manager. When you clone a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn set_configuration_manager(
mut self,
input: std::option::Option<crate::model::StackConfigurationManager>,
) -> Self {
self.configuration_manager = input;
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(mut self, input: crate::model::ChefConfiguration) -> Self {
self.chef_configuration = Some(input);
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn set_chef_configuration(
mut self,
input: std::option::Option<crate::model::ChefConfiguration>,
) -> Self {
self.chef_configuration = input;
self
}
/// <p>Whether to use custom cookbooks.</p>
pub fn use_custom_cookbooks(mut self, input: bool) -> Self {
self.use_custom_cookbooks = Some(input);
self
}
/// <p>Whether to use custom cookbooks.</p>
pub fn set_use_custom_cookbooks(mut self, input: std::option::Option<bool>) -> Self {
self.use_custom_cookbooks = input;
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate Amazon Elastic Compute Cloud (Amazon EC2) security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(mut self, input: bool) -> Self {
self.use_opsworks_security_groups = Some(input);
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate Amazon Elastic Compute Cloud (Amazon EC2) security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn set_use_opsworks_security_groups(
mut self,
input: std::option::Option<bool>,
) -> Self {
self.use_opsworks_security_groups = input;
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(mut self, input: crate::model::Source) -> Self {
self.custom_cookbooks_source = Some(input);
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn set_custom_cookbooks_source(
mut self,
input: std::option::Option<crate::model::Source>,
) -> Self {
self.custom_cookbooks_source = input;
self
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(mut self, input: impl Into<std::string::String>) -> Self {
self.default_ssh_key_name = Some(input.into());
self
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn set_default_ssh_key_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_ssh_key_name = input;
self
}
/// <p>Whether to clone the source stack's permissions.</p>
pub fn clone_permissions(mut self, input: bool) -> Self {
self.clone_permissions = Some(input);
self
}
/// <p>Whether to clone the source stack's permissions.</p>
pub fn set_clone_permissions(mut self, input: std::option::Option<bool>) -> Self {
self.clone_permissions = input;
self
}
/// Appends an item to `clone_app_ids`.
///
/// To override the contents of this collection use [`set_clone_app_ids`](Self::set_clone_app_ids).
///
/// <p>A list of source stack app IDs to be included in the cloned stack.</p>
pub fn clone_app_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.clone_app_ids.unwrap_or_default();
v.push(input.into());
self.clone_app_ids = Some(v);
self
}
/// <p>A list of source stack app IDs to be included in the cloned stack.</p>
pub fn set_clone_app_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.clone_app_ids = input;
self
}
/// <p>The default root device type. This value is used by default for all instances in the cloned
/// stack, but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(mut self, input: crate::model::RootDeviceType) -> Self {
self.default_root_device_type = Some(input);
self
}
/// <p>The default root device type. This value is used by default for all instances in the cloned
/// stack, but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn set_default_root_device_type(
mut self,
input: std::option::Option<crate::model::RootDeviceType>,
) -> Self {
self.default_root_device_type = input;
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update
/// the agent version, you must edit the stack configuration and specify a new version.
/// AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn agent_version(mut self, input: impl Into<std::string::String>) -> Self {
self.agent_version = Some(input.into());
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update
/// the agent version, you must edit the stack configuration and specify a new version.
/// AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn set_agent_version(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.agent_version = input;
self
}
/// Consumes the builder and constructs a [`CloneStackInput`](crate::input::CloneStackInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::CloneStackInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CloneStackInput {
source_stack_id: self.source_stack_id,
name: self.name,
region: self.region,
vpc_id: self.vpc_id,
attributes: self.attributes,
service_role_arn: self.service_role_arn,
default_instance_profile_arn: self.default_instance_profile_arn,
default_os: self.default_os,
hostname_theme: self.hostname_theme,
default_availability_zone: self.default_availability_zone,
default_subnet_id: self.default_subnet_id,
custom_json: self.custom_json,
configuration_manager: self.configuration_manager,
chef_configuration: self.chef_configuration,
use_custom_cookbooks: self.use_custom_cookbooks,
use_opsworks_security_groups: self.use_opsworks_security_groups,
custom_cookbooks_source: self.custom_cookbooks_source,
default_ssh_key_name: self.default_ssh_key_name,
clone_permissions: self.clone_permissions,
clone_app_ids: self.clone_app_ids,
default_root_device_type: self.default_root_device_type,
agent_version: self.agent_version,
})
}
}
}
#[doc(hidden)]
pub type CloneStackInputOperationOutputAlias = crate::operation::CloneStack;
#[doc(hidden)]
pub type CloneStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CloneStackInput {
/// Consumes the builder and constructs an Operation<[`CloneStack`](crate::operation::CloneStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CloneStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CloneStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CloneStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CloneStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CloneStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_clone_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CloneStack::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CloneStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CloneStackInput`](crate::input::CloneStackInput)
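    ///
    /// A minimal usage sketch that only touches fields documented above (all values are
    /// placeholders; the external crate name `aws_sdk_opsworks` is an assumption):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::CloneStackInput::builder()
    ///     .source_stack_id("source-stack-id")
    ///     .name("cloned-stack")
    ///     // Keep the source stack's permissions and bring one app along.
    ///     .clone_permissions(true)
    ///     .clone_app_ids("app-id")
    ///     // Override the stack configuration attributes with custom JSON.
    ///     .custom_json(r#"{"key1": "value1"}"#)
    ///     // Let AWS OpsWorks Stacks auto-update the agent.
    ///     .agent_version("LATEST")
    ///     .build()
    ///     .expect("build() only moves the collected fields into the input struct");
    /// # let _ = input;
    /// ```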
pub fn builder() -> crate::input::clone_stack_input::Builder {
crate::input::clone_stack_input::Builder::default()
}
}
/// See [`CreateAppInput`](crate::input::CreateAppInput)
pub mod create_app_input {
/// A builder for [`CreateAppInput`](crate::input::CreateAppInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) shortname: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) description: std::option::Option<std::string::String>,
pub(crate) data_sources: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
pub(crate) r#type: std::option::Option<crate::model::AppType>,
pub(crate) app_source: std::option::Option<crate::model::Source>,
pub(crate) domains: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) enable_ssl: std::option::Option<bool>,
pub(crate) ssl_configuration: std::option::Option<crate::model::SslConfiguration>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
pub(crate) environment:
std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The app's short name.</p>
pub fn shortname(mut self, input: impl Into<std::string::String>) -> Self {
self.shortname = Some(input.into());
self
}
/// <p>The app's short name.</p>
pub fn set_shortname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.shortname = input;
self
}
/// <p>The app name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The app name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>A description of the app.</p>
pub fn description(mut self, input: impl Into<std::string::String>) -> Self {
self.description = Some(input.into());
self
}
/// <p>A description of the app.</p>
pub fn set_description(mut self, input: std::option::Option<std::string::String>) -> Self {
self.description = input;
self
}
/// Appends an item to `data_sources`.
///
/// To override the contents of this collection use [`set_data_sources`](Self::set_data_sources).
///
/// <p>The app's data source.</p>
pub fn data_sources(mut self, input: impl Into<crate::model::DataSource>) -> Self {
let mut v = self.data_sources.unwrap_or_default();
v.push(input.into());
self.data_sources = Some(v);
self
}
/// <p>The app's data source.</p>
pub fn set_data_sources(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
) -> Self {
self.data_sources = input;
self
}
/// <p>The app type. Each supported type is associated with a particular layer. For example, PHP
/// applications are associated with a PHP layer. AWS OpsWorks Stacks deploys an application to those instances
/// that are members of the corresponding layer. If your app isn't one of the standard types, or
/// you prefer to implement your own Deploy recipes, specify <code>other</code>.</p>
pub fn r#type(mut self, input: crate::model::AppType) -> Self {
self.r#type = Some(input);
self
}
/// <p>The app type. Each supported type is associated with a particular layer. For example, PHP
/// applications are associated with a PHP layer. AWS OpsWorks Stacks deploys an application to those instances
/// that are members of the corresponding layer. If your app isn't one of the standard types, or
/// you prefer to implement your own Deploy recipes, specify <code>other</code>.</p>
pub fn set_type(mut self, input: std::option::Option<crate::model::AppType>) -> Self {
self.r#type = input;
self
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn app_source(mut self, input: crate::model::Source) -> Self {
self.app_source = Some(input);
self
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn set_app_source(mut self, input: std::option::Option<crate::model::Source>) -> Self {
self.app_source = input;
self
}
/// Appends an item to `domains`.
///
/// To override the contents of this collection use [`set_domains`](Self::set_domains).
///
/// <p>The app virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
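        ///
        /// A small sketch: the builder's appender is called once per domain rather than being
        /// passed the comma-separated string shown above (crate name `aws_sdk_opsworks` assumed):
        ///
        /// ```no_run
        /// let builder = aws_sdk_opsworks::input::CreateAppInput::builder()
        ///     .domains("www.example.com")
        ///     .domains("example.com");
        /// # let _ = builder;
        /// ```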
pub fn domains(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.domains.unwrap_or_default();
v.push(input.into());
self.domains = Some(v);
self
}
/// <p>The app virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub fn set_domains(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.domains = input;
self
}
/// <p>Whether to enable SSL for the app.</p>
pub fn enable_ssl(mut self, input: bool) -> Self {
self.enable_ssl = Some(input);
self
}
/// <p>Whether to enable SSL for the app.</p>
pub fn set_enable_ssl(mut self, input: std::option::Option<bool>) -> Self {
self.enable_ssl = input;
self
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn ssl_configuration(mut self, input: crate::model::SslConfiguration) -> Self {
self.ssl_configuration = Some(input);
self
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn set_ssl_configuration(
mut self,
input: std::option::Option<crate::model::SslConfiguration>,
) -> Self {
self.ssl_configuration = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
        /// <p>One or more user-defined key/value pairs to be added to the app attributes.</p>
pub fn attributes(
mut self,
k: impl Into<crate::model::AppAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
        /// <p>One or more user-defined key/value pairs to be added to the app attributes.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// Appends an item to `environment`.
///
/// To override the contents of this collection use [`set_environment`](Self::set_environment).
///
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn environment(mut self, input: impl Into<crate::model::EnvironmentVariable>) -> Self {
let mut v = self.environment.unwrap_or_default();
v.push(input.into());
self.environment = Some(v);
self
}
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn set_environment(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
) -> Self {
self.environment = input;
self
}
/// Consumes the builder and constructs a [`CreateAppInput`](crate::input::CreateAppInput)
pub fn build(
self,
) -> std::result::Result<crate::input::CreateAppInput, aws_smithy_http::operation::BuildError>
{
Ok(crate::input::CreateAppInput {
stack_id: self.stack_id,
shortname: self.shortname,
name: self.name,
description: self.description,
data_sources: self.data_sources,
r#type: self.r#type,
app_source: self.app_source,
domains: self.domains,
enable_ssl: self.enable_ssl,
ssl_configuration: self.ssl_configuration,
attributes: self.attributes,
environment: self.environment,
})
}
}
}
#[doc(hidden)]
pub type CreateAppInputOperationOutputAlias = crate::operation::CreateApp;
#[doc(hidden)]
pub type CreateAppInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateAppInput {
/// Consumes the builder and constructs an Operation<[`CreateApp`](crate::operation::CreateApp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateApp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateAppInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateAppInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateAppInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateApp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_create_app(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op =
aws_smithy_http::operation::Operation::new(request, crate::operation::CreateApp::new())
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateApp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateAppInput`](crate::input::CreateAppInput)
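    ///
    /// A minimal usage sketch (IDs and names are placeholders; the external crate name
    /// `aws_sdk_opsworks` is an assumption):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::CreateAppInput::builder()
    ///     .stack_id("stack-id")
    ///     .name("My App")
    ///     .shortname("myapp")
    ///     .description("Example app registered against an existing stack")
    ///     .enable_ssl(false)
    ///     .build()
    ///     .expect("build() only moves the collected fields into the input struct");
    /// # let _ = input;
    /// ```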
pub fn builder() -> crate::input::create_app_input::Builder {
crate::input::create_app_input::Builder::default()
}
}
/// See [`CreateDeploymentInput`](crate::input::CreateDeploymentInput)
pub mod create_deployment_input {
/// A builder for [`CreateDeploymentInput`](crate::input::CreateDeploymentInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) app_id: std::option::Option<std::string::String>,
pub(crate) instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) command: std::option::Option<crate::model::DeploymentCommand>,
pub(crate) comment: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The app ID. This parameter is required for app deployments, but not for other deployment commands.</p>
pub fn app_id(mut self, input: impl Into<std::string::String>) -> Self {
self.app_id = Some(input.into());
self
}
/// <p>The app ID. This parameter is required for app deployments, but not for other deployment commands.</p>
pub fn set_app_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.app_id = input;
self
}
/// Appends an item to `instance_ids`.
///
/// To override the contents of this collection use [`set_instance_ids`](Self::set_instance_ids).
///
/// <p>The instance IDs for the deployment targets.</p>
pub fn instance_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.instance_ids.unwrap_or_default();
v.push(input.into());
self.instance_ids = Some(v);
self
}
/// <p>The instance IDs for the deployment targets.</p>
pub fn set_instance_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.instance_ids = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>The layer IDs for the deployment targets.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>The layer IDs for the deployment targets.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// <p>A <code>DeploymentCommand</code> object that specifies the deployment command and any
/// associated arguments.</p>
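/// <p>For example (a sketch that assumes the generated <code>DeploymentCommand</code> and
/// <code>DeploymentCommandName</code> model builders in this crate), an app-deploy command
/// could be attached like this:</p>
///
/// ```no_run
/// use aws_sdk_opsworks::model::{DeploymentCommand, DeploymentCommandName};
///
/// // Build the model object, then hand it to the input builder.
/// let command = DeploymentCommand::builder()
///     .name(DeploymentCommandName::Deploy)
///     .build();
/// let builder = aws_sdk_opsworks::input::CreateDeploymentInput::builder().command(command);
/// # let _ = builder;
/// ```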
pub fn command(mut self, input: crate::model::DeploymentCommand) -> Self {
self.command = Some(input);
self
}
/// <p>A <code>DeploymentCommand</code> object that specifies the deployment command and any
/// associated arguments.</p>
pub fn set_command(
mut self,
input: std::option::Option<crate::model::DeploymentCommand>,
) -> Self {
self.command = input;
self
}
/// <p>A user-defined comment.</p>
pub fn comment(mut self, input: impl Into<std::string::String>) -> Self {
self.comment = Some(input.into());
self
}
/// <p>A user-defined comment.</p>
pub fn set_comment(mut self, input: std::option::Option<std::string::String>) -> Self {
self.comment = input;
self
}
/// <p>A string that contains user-defined, custom JSON. You can use this parameter to override some corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a> and
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">Overriding Attributes With Custom JSON</a>.</p>
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A string that contains user-defined, custom JSON. You can use this parameter to override some corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a> and
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">Overriding Attributes With Custom JSON</a>.</p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
/// Consumes the builder and constructs a [`CreateDeploymentInput`](crate::input::CreateDeploymentInput)
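///
/// A minimal sketch of a full builder chain; the IDs are hypothetical placeholders, and a
/// real deployment would also usually set a `DeploymentCommand`:
///
/// ```no_run
/// # fn example() -> Result<(), aws_smithy_http::operation::BuildError> {
/// let input = aws_sdk_opsworks::input::CreateDeploymentInput::builder()
///     .stack_id("00000000-stack-id-example")
///     .app_id("00000000-app-id-example")
///     .comment("Deploy the latest revision")
///     .build()?;
/// # let _ = input;
/// # Ok(())
/// # }
/// ```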
pub fn build(
self,
) -> std::result::Result<
crate::input::CreateDeploymentInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CreateDeploymentInput {
stack_id: self.stack_id,
app_id: self.app_id,
instance_ids: self.instance_ids,
layer_ids: self.layer_ids,
command: self.command,
comment: self.comment,
custom_json: self.custom_json,
})
}
}
}
#[doc(hidden)]
pub type CreateDeploymentInputOperationOutputAlias = crate::operation::CreateDeployment;
#[doc(hidden)]
pub type CreateDeploymentInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateDeploymentInput {
/// Consumes the builder and constructs an Operation<[`CreateDeployment`](crate::operation::CreateDeployment)>
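///
/// A rough sketch of driving this low-level entry point directly (most callers use the
/// generated fluent client instead); the bare `Config` built here is illustration only and
/// would normally be configured with a region, credentials, and an endpoint resolver:
///
/// ```no_run
/// # async fn example(
/// #     input: aws_sdk_opsworks::input::CreateDeploymentInput,
/// # ) -> Result<(), aws_smithy_http::operation::BuildError> {
/// let config = aws_sdk_opsworks::config::Config::builder().build();
/// let operation = input.make_operation(&config).await?;
/// # let _ = operation;
/// # Ok(())
/// # }
/// ```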
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateDeployment,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateDeploymentInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateDeploymentInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateDeploymentInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateDeployment",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_create_deployment(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CreateDeployment::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateDeployment",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateDeploymentInput`](crate::input::CreateDeploymentInput)
pub fn builder() -> crate::input::create_deployment_input::Builder {
crate::input::create_deployment_input::Builder::default()
}
}
/// See [`CreateInstanceInput`](crate::input::CreateInstanceInput)
pub mod create_instance_input {
/// A builder for [`CreateInstanceInput`](crate::input::CreateInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) instance_type: std::option::Option<std::string::String>,
pub(crate) auto_scaling_type: std::option::Option<crate::model::AutoScalingType>,
pub(crate) hostname: std::option::Option<std::string::String>,
pub(crate) os: std::option::Option<std::string::String>,
pub(crate) ami_id: std::option::Option<std::string::String>,
pub(crate) ssh_key_name: std::option::Option<std::string::String>,
pub(crate) availability_zone: std::option::Option<std::string::String>,
pub(crate) virtualization_type: std::option::Option<std::string::String>,
pub(crate) subnet_id: std::option::Option<std::string::String>,
pub(crate) architecture: std::option::Option<crate::model::Architecture>,
pub(crate) root_device_type: std::option::Option<crate::model::RootDeviceType>,
pub(crate) block_device_mappings:
std::option::Option<std::vec::Vec<crate::model::BlockDeviceMapping>>,
pub(crate) install_updates_on_boot: std::option::Option<bool>,
pub(crate) ebs_optimized: std::option::Option<bool>,
pub(crate) agent_version: std::option::Option<std::string::String>,
pub(crate) tenancy: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>An array that contains the instance's layer IDs.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>An array that contains the instance's layer IDs.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn instance_type(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_type = Some(input.into());
self
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn set_instance_type(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.instance_type = input;
self
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn auto_scaling_type(mut self, input: crate::model::AutoScalingType) -> Self {
self.auto_scaling_type = Some(input);
self
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn set_auto_scaling_type(
mut self,
input: std::option::Option<crate::model::AutoScalingType>,
) -> Self {
self.auto_scaling_type = input;
self
}
/// <p>The instance host name.</p>
pub fn hostname(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname = Some(input.into());
self
}
/// <p>The instance host name.</p>
pub fn set_hostname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.hostname = input;
self
}
/// <p>The instance's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about the supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the <a>CreateInstance</a> action's AmiId parameter to
/// specify the custom AMI that you want to use. Block device mappings are not supported if the value is <code>Custom</code>. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with AWS OpsWorks Stacks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
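/// <p>For example, the two common cases look like the following (the AMI ID is a
/// hypothetical placeholder):</p>
///
/// ```no_run
/// let builder = aws_sdk_opsworks::input::CreateInstanceInput::builder();
/// // Use a stock operating system...
/// let stock = builder.clone().os("Amazon Linux 2018.03");
/// // ...or a custom AMI, which also requires `Os` to be set to `Custom`.
/// let custom = builder.os("Custom").ami_id("ami-0123456789abcdef0");
/// # let _ = (stock, custom);
/// ```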
pub fn os(mut self, input: impl Into<std::string::String>) -> Self {
self.os = Some(input.into());
self
}
/// <p>The instance's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about the supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the <a>CreateInstance</a> action's AmiId parameter to
/// specify the custom AMI that you want to use. Block device mappings are not supported if the value is <code>Custom</code>. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with AWS OpsWorks Stacks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
pub fn set_os(mut self, input: std::option::Option<std::string::String>) -> Self {
self.os = input;
self
}
/// <p>A custom AMI ID to be used to create the instance. The AMI should be based on one of the
/// supported operating systems.
/// For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using Custom AMIs</a>.</p>
/// <note>
/// <p>If you specify a custom AMI, you must set <code>Os</code> to <code>Custom</code>.</p>
/// </note>
pub fn ami_id(mut self, input: impl Into<std::string::String>) -> Self {
self.ami_id = Some(input.into());
self
}
/// <p>A custom AMI ID to be used to create the instance. The AMI should be based on one of the
/// supported operating systems.
/// For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using Custom AMIs</a>.</p>
/// <note>
/// <p>If you specify a custom AMI, you must set <code>Os</code> to <code>Custom</code>.</p>
/// </note>
pub fn set_ami_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ami_id = input;
self
}
/// <p>The instance's Amazon EC2 key-pair name.</p>
pub fn ssh_key_name(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_key_name = Some(input.into());
self
}
/// <p>The instance's Amazon EC2 key-pair name.</p>
pub fn set_ssh_key_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ssh_key_name = input;
self
}
/// <p>The instance Availability Zone. For more information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn availability_zone(mut self, input: impl Into<std::string::String>) -> Self {
self.availability_zone = Some(input.into());
self
}
/// <p>The instance Availability Zone. For more information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn set_availability_zone(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.availability_zone = input;
self
}
/// <p>The instance's virtualization type, <code>paravirtual</code> or <code>hvm</code>.</p>
pub fn virtualization_type(mut self, input: impl Into<std::string::String>) -> Self {
self.virtualization_type = Some(input.into());
self
}
/// <p>The instance's virtualization type, <code>paravirtual</code> or <code>hvm</code>.</p>
pub fn set_virtualization_type(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.virtualization_type = input;
self
}
/// <p>The ID of the instance's subnet. If the stack is running in a VPC, you can use this parameter to override the stack's default subnet ID value and direct AWS OpsWorks Stacks to launch the instance in a different subnet.</p>
pub fn subnet_id(mut self, input: impl Into<std::string::String>) -> Self {
self.subnet_id = Some(input.into());
self
}
/// <p>The ID of the instance's subnet. If the stack is running in a VPC, you can use this parameter to override the stack's default subnet ID value and direct AWS OpsWorks Stacks to launch the instance in a different subnet.</p>
pub fn set_subnet_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.subnet_id = input;
self
}
/// <p>The instance architecture. The default option is <code>x86_64</code>. Instance types do not
/// necessarily support both architectures. For a list of the architectures that are supported by
/// the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance Families and
/// Types</a>.</p>
pub fn architecture(mut self, input: crate::model::Architecture) -> Self {
self.architecture = Some(input);
self
}
/// <p>The instance architecture. The default option is <code>x86_64</code>. Instance types do not
/// necessarily support both architectures. For a list of the architectures that are supported by
/// the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance Families and
/// Types</a>.</p>
pub fn set_architecture(
mut self,
input: std::option::Option<crate::model::Architecture>,
) -> Self {
self.architecture = input;
self
}
/// <p>The instance root device type. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn root_device_type(mut self, input: crate::model::RootDeviceType) -> Self {
self.root_device_type = Some(input);
self
}
/// <p>The instance root device type. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn set_root_device_type(
mut self,
input: std::option::Option<crate::model::RootDeviceType>,
) -> Self {
self.root_device_type = input;
self
}
/// Appends an item to `block_device_mappings`.
///
/// To override the contents of this collection use [`set_block_device_mappings`](Self::set_block_device_mappings).
///
/// <p>An array of <code>BlockDeviceMapping</code> objects that specify the instance's block
/// devices. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html">Block
/// Device Mapping</a>. Note that block device mappings are not supported for custom AMIs.</p>
pub fn block_device_mappings(
mut self,
input: impl Into<crate::model::BlockDeviceMapping>,
) -> Self {
let mut v = self.block_device_mappings.unwrap_or_default();
v.push(input.into());
self.block_device_mappings = Some(v);
self
}
/// <p>An array of <code>BlockDeviceMapping</code> objects that specify the instance's block
/// devices. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html">Block
/// Device Mapping</a>. Note that block device mappings are not supported for custom AMIs.</p>
pub fn set_block_device_mappings(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::BlockDeviceMapping>>,
) -> Self {
self.block_device_mappings = input;
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code> to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(mut self, input: bool) -> Self {
self.install_updates_on_boot = Some(input);
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code> to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn set_install_updates_on_boot(mut self, input: std::option::Option<bool>) -> Self {
self.install_updates_on_boot = input;
self
}
/// <p>Whether to create an Amazon EBS-optimized instance.</p>
pub fn ebs_optimized(mut self, input: bool) -> Self {
self.ebs_optimized = Some(input);
self
}
/// <p>Whether to create an Amazon EBS-optimized instance.</p>
pub fn set_ebs_optimized(mut self, input: std::option::Option<bool>) -> Self {
self.ebs_optimized = input;
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
pub fn agent_version(mut self, input: impl Into<std::string::String>) -> Self {
self.agent_version = Some(input.into());
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
pub fn set_agent_version(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.agent_version = input;
self
}
/// <p>The instance's tenancy option. The default option is no tenancy, or if the instance is running in a VPC, inherit tenancy settings from the VPC. The following are valid values for this parameter: <code>dedicated</code>, <code>default</code>, or <code>host</code>. Because there are costs associated with changes in tenancy options, we recommend that you research tenancy options before choosing them for your instances. For more information about dedicated hosts, see <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Dedicated Hosts Overview</a> and <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Amazon EC2 Dedicated Hosts</a>. For more information about dedicated instances, see <a href="https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html">Dedicated Instances</a> and <a href="http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/">Amazon EC2 Dedicated Instances</a>.</p>
pub fn tenancy(mut self, input: impl Into<std::string::String>) -> Self {
self.tenancy = Some(input.into());
self
}
/// <p>The instance's tenancy option. The default option is no tenancy, or if the instance is running in a VPC, inherit tenancy settings from the VPC. The following are valid values for this parameter: <code>dedicated</code>, <code>default</code>, or <code>host</code>. Because there are costs associated with changes in tenancy options, we recommend that you research tenancy options before choosing them for your instances. For more information about dedicated hosts, see <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Dedicated Hosts Overview</a> and <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Amazon EC2 Dedicated Hosts</a>. For more information about dedicated instances, see <a href="https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html">Dedicated Instances</a> and <a href="http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/">Amazon EC2 Dedicated Instances</a>.</p>
pub fn set_tenancy(mut self, input: std::option::Option<std::string::String>) -> Self {
self.tenancy = input;
self
}
/// Consumes the builder and constructs a [`CreateInstanceInput`](crate::input::CreateInstanceInput)
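///
/// A minimal sketch of building an input for a single always-on instance; the stack and
/// layer IDs are hypothetical placeholders:
///
/// ```no_run
/// # fn example() -> Result<(), aws_smithy_http::operation::BuildError> {
/// let input = aws_sdk_opsworks::input::CreateInstanceInput::builder()
///     .stack_id("00000000-stack-id-example")
///     .layer_ids("00000000-layer-id-example")
///     .instance_type("t2.micro")
///     .hostname("web01")
///     .build()?;
/// # let _ = input;
/// # Ok(())
/// # }
/// ```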
pub fn build(
self,
) -> std::result::Result<
crate::input::CreateInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CreateInstanceInput {
stack_id: self.stack_id,
layer_ids: self.layer_ids,
instance_type: self.instance_type,
auto_scaling_type: self.auto_scaling_type,
hostname: self.hostname,
os: self.os,
ami_id: self.ami_id,
ssh_key_name: self.ssh_key_name,
availability_zone: self.availability_zone,
virtualization_type: self.virtualization_type,
subnet_id: self.subnet_id,
architecture: self.architecture,
root_device_type: self.root_device_type,
block_device_mappings: self.block_device_mappings,
install_updates_on_boot: self.install_updates_on_boot,
ebs_optimized: self.ebs_optimized,
agent_version: self.agent_version,
tenancy: self.tenancy,
})
}
}
}
#[doc(hidden)]
pub type CreateInstanceInputOperationOutputAlias = crate::operation::CreateInstance;
#[doc(hidden)]
pub type CreateInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateInstanceInput {
/// Consumes the builder and constructs an Operation<[`CreateInstance`](crate::operation::CreateInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_create_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CreateInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateInstanceInput`](crate::input::CreateInstanceInput)
pub fn builder() -> crate::input::create_instance_input::Builder {
crate::input::create_instance_input::Builder::default()
}
}
/// See [`CreateLayerInput`](crate::input::CreateLayerInput)
pub mod create_layer_input {
/// A builder for [`CreateLayerInput`](crate::input::CreateLayerInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) r#type: std::option::Option<crate::model::LayerType>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) shortname: std::option::Option<std::string::String>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
pub(crate) cloud_watch_logs_configuration:
std::option::Option<crate::model::CloudWatchLogsConfiguration>,
pub(crate) custom_instance_profile_arn: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
pub(crate) custom_security_group_ids:
std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) packages: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) volume_configurations:
std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
pub(crate) enable_auto_healing: std::option::Option<bool>,
pub(crate) auto_assign_elastic_ips: std::option::Option<bool>,
pub(crate) auto_assign_public_ips: std::option::Option<bool>,
pub(crate) custom_recipes: std::option::Option<crate::model::Recipes>,
pub(crate) install_updates_on_boot: std::option::Option<bool>,
pub(crate) use_ebs_optimized_instances: std::option::Option<bool>,
pub(crate) lifecycle_event_configuration:
std::option::Option<crate::model::LifecycleEventConfiguration>,
}
impl Builder {
/// <p>The layer stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The layer stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The layer type. A stack cannot have more than one built-in layer of the same type. It can have any number of custom layers. Built-in layers are not available in Chef 12 stacks.</p>
pub fn r#type(mut self, input: crate::model::LayerType) -> Self {
self.r#type = Some(input);
self
}
/// <p>The layer type. A stack cannot have more than one built-in layer of the same type. It can have any number of custom layers. Built-in layers are not available in Chef 12 stacks.</p>
pub fn set_type(mut self, input: std::option::Option<crate::model::LayerType>) -> Self {
self.r#type = input;
self
}
/// <p>The layer name, which is used by the console.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The layer name, which is used by the console.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>.</p>
pub fn shortname(mut self, input: impl Into<std::string::String>) -> Self {
self.shortname = Some(input.into());
self
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>.</p>
pub fn set_shortname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.shortname = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
/// <p>To create a cluster layer, set the <code>EcsClusterArn</code> attribute to the cluster's ARN.</p>
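/// <p>For example, a cluster layer could reference a (hypothetical) ECS cluster ARN
/// like this:</p>
///
/// ```no_run
/// use aws_sdk_opsworks::model::LayerAttributesKeys;
///
/// let builder = aws_sdk_opsworks::input::CreateLayerInput::builder()
///     .attributes(
///         LayerAttributesKeys::EcsClusterArn,
///         "arn:aws:ecs:us-east-1:111122223333:cluster/example",
///     );
/// # let _ = builder;
/// ```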
pub fn attributes(
mut self,
k: impl Into<crate::model::LayerAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
/// <p>To create a cluster layer, set the <code>EcsClusterArn</code> attribute to the cluster's ARN.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn cloud_watch_logs_configuration(
mut self,
input: crate::model::CloudWatchLogsConfiguration,
) -> Self {
self.cloud_watch_logs_configuration = Some(input);
self
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn set_cloud_watch_logs_configuration(
mut self,
input: std::option::Option<crate::model::CloudWatchLogsConfiguration>,
) -> Self {
self.cloud_watch_logs_configuration = input;
self
}
/// <p>The ARN of an IAM profile to be used for the layer's EC2 instances. For more information
/// about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using Identifiers</a>.</p>
pub fn custom_instance_profile_arn(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.custom_instance_profile_arn = Some(input.into());
self
}
/// <p>The ARN of an IAM profile to be used for the layer's EC2 instances. For more information
/// about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using Identifiers</a>.</p>
pub fn set_custom_instance_profile_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.custom_instance_profile_arn = input;
self
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>. This feature is supported as of version 1.7.42 of the AWS CLI.
/// </p>
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>. This feature is supported as of version 1.7.42 of the AWS CLI.
/// </p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
/// Appends an item to `custom_security_group_ids`.
///
/// To override the contents of this collection use [`set_custom_security_group_ids`](Self::set_custom_security_group_ids).
///
/// <p>An array containing the layer custom security group IDs.</p>
pub fn custom_security_group_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.custom_security_group_ids.unwrap_or_default();
v.push(input.into());
self.custom_security_group_ids = Some(v);
self
}
/// <p>An array containing the layer custom security group IDs.</p>
pub fn set_custom_security_group_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.custom_security_group_ids = input;
self
}
/// Appends an item to `packages`.
///
/// To override the contents of this collection use [`set_packages`](Self::set_packages).
///
/// <p>An array of <code>Package</code> objects that describes the layer packages.</p>
pub fn packages(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.packages.unwrap_or_default();
v.push(input.into());
self.packages = Some(v);
self
}
/// <p>An array of <code>Package</code> objects that describes the layer packages.</p>
pub fn set_packages(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.packages = input;
self
}
/// Appends an item to `volume_configurations`.
///
/// To override the contents of this collection use [`set_volume_configurations`](Self::set_volume_configurations).
///
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn volume_configurations(
mut self,
input: impl Into<crate::model::VolumeConfiguration>,
) -> Self {
let mut v = self.volume_configurations.unwrap_or_default();
v.push(input.into());
self.volume_configurations = Some(v);
self
}
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn set_volume_configurations(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
) -> Self {
self.volume_configurations = input;
self
}
/// <p>Whether to disable auto healing for the layer.</p>
pub fn enable_auto_healing(mut self, input: bool) -> Self {
self.enable_auto_healing = Some(input);
self
}
/// <p>Whether to disable auto healing for the layer.</p>
pub fn set_enable_auto_healing(mut self, input: std::option::Option<bool>) -> Self {
self.enable_auto_healing = input;
self
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_elastic_ips(mut self, input: bool) -> Self {
self.auto_assign_elastic_ips = Some(input);
self
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn set_auto_assign_elastic_ips(mut self, input: std::option::Option<bool>) -> Self {
self.auto_assign_elastic_ips = input;
self
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_public_ips(mut self, input: bool) -> Self {
self.auto_assign_public_ips = Some(input);
self
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn set_auto_assign_public_ips(mut self, input: std::option::Option<bool>) -> Self {
self.auto_assign_public_ips = input;
self
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer custom recipes.</p>
pub fn custom_recipes(mut self, input: crate::model::Recipes) -> Self {
self.custom_recipes = Some(input);
self
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer custom recipes.</p>
pub fn set_custom_recipes(
mut self,
input: std::option::Option<crate::model::Recipes>,
) -> Self {
self.custom_recipes = input;
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>To ensure that your
/// instances have the latest security updates, we strongly recommend using the default value of <code>true</code>.</p>
/// </note>
pub fn install_updates_on_boot(mut self, input: bool) -> Self {
self.install_updates_on_boot = Some(input);
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>To ensure that your
/// instances have the latest security updates, we strongly recommend using the default value of <code>true</code>.</p>
/// </note>
pub fn set_install_updates_on_boot(mut self, input: std::option::Option<bool>) -> Self {
self.install_updates_on_boot = input;
self
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn use_ebs_optimized_instances(mut self, input: bool) -> Self {
self.use_ebs_optimized_instances = Some(input);
self
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn set_use_ebs_optimized_instances(mut self, input: std::option::Option<bool>) -> Self {
self.use_ebs_optimized_instances = input;
self
}
/// <p>A <code>LifeCycleEventConfiguration</code> object that you can use to configure the Shutdown event to
/// specify an execution timeout and enable or disable Elastic Load Balancer connection
/// draining.</p>
pub fn lifecycle_event_configuration(
mut self,
input: crate::model::LifecycleEventConfiguration,
) -> Self {
self.lifecycle_event_configuration = Some(input);
self
}
/// <p>A <code>LifeCycleEventConfiguration</code> object that you can use to configure the Shutdown event to
/// specify an execution timeout and enable or disable Elastic Load Balancer connection
/// draining.</p>
pub fn set_lifecycle_event_configuration(
mut self,
input: std::option::Option<crate::model::LifecycleEventConfiguration>,
) -> Self {
self.lifecycle_event_configuration = input;
self
}
/// Consumes the builder and constructs a [`CreateLayerInput`](crate::input::CreateLayerInput)
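///
/// A minimal sketch of building a custom layer; the stack ID is a hypothetical
/// placeholder:
///
/// ```no_run
/// # fn example() -> Result<(), aws_smithy_http::operation::BuildError> {
/// use aws_sdk_opsworks::model::LayerType;
///
/// let input = aws_sdk_opsworks::input::CreateLayerInput::builder()
///     .stack_id("00000000-stack-id-example")
///     .r#type(LayerType::Custom)
///     .name("Workers")
///     .shortname("workers")
///     .build()?;
/// # let _ = input;
/// # Ok(())
/// # }
/// ```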
pub fn build(
self,
) -> std::result::Result<
crate::input::CreateLayerInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CreateLayerInput {
stack_id: self.stack_id,
r#type: self.r#type,
name: self.name,
shortname: self.shortname,
attributes: self.attributes,
cloud_watch_logs_configuration: self.cloud_watch_logs_configuration,
custom_instance_profile_arn: self.custom_instance_profile_arn,
custom_json: self.custom_json,
custom_security_group_ids: self.custom_security_group_ids,
packages: self.packages,
volume_configurations: self.volume_configurations,
enable_auto_healing: self.enable_auto_healing,
auto_assign_elastic_ips: self.auto_assign_elastic_ips,
auto_assign_public_ips: self.auto_assign_public_ips,
custom_recipes: self.custom_recipes,
install_updates_on_boot: self.install_updates_on_boot,
use_ebs_optimized_instances: self.use_ebs_optimized_instances,
lifecycle_event_configuration: self.lifecycle_event_configuration,
})
}
}
}
#[doc(hidden)]
pub type CreateLayerInputOperationOutputAlias = crate::operation::CreateLayer;
#[doc(hidden)]
pub type CreateLayerInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateLayerInput {
/// Consumes the builder and constructs an Operation<[`CreateLayer`](crate::operation::CreateLayer)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateLayer,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateLayerInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateLayerInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateLayerInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateLayer",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_create_layer(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CreateLayer::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateLayer",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateLayerInput`](crate::input::CreateLayerInput)
pub fn builder() -> crate::input::create_layer_input::Builder {
crate::input::create_layer_input::Builder::default()
}
}
/// See [`CreateStackInput`](crate::input::CreateStackInput)
pub mod create_stack_input {
/// A builder for [`CreateStackInput`](crate::input::CreateStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) region: std::option::Option<std::string::String>,
pub(crate) vpc_id: std::option::Option<std::string::String>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
pub(crate) service_role_arn: std::option::Option<std::string::String>,
pub(crate) default_instance_profile_arn: std::option::Option<std::string::String>,
pub(crate) default_os: std::option::Option<std::string::String>,
pub(crate) hostname_theme: std::option::Option<std::string::String>,
pub(crate) default_availability_zone: std::option::Option<std::string::String>,
pub(crate) default_subnet_id: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
pub(crate) configuration_manager:
std::option::Option<crate::model::StackConfigurationManager>,
pub(crate) chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
pub(crate) use_custom_cookbooks: std::option::Option<bool>,
pub(crate) use_opsworks_security_groups: std::option::Option<bool>,
pub(crate) custom_cookbooks_source: std::option::Option<crate::model::Source>,
pub(crate) default_ssh_key_name: std::option::Option<std::string::String>,
pub(crate) default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
pub(crate) agent_version: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The stack name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>The stack's AWS region, such as <code>ap-south-1</code>. For more information about
/// Amazon regions, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
/// <note>
/// <p>In the AWS CLI, this API maps to the <code>--stack-region</code> parameter. If the
/// <code>--stack-region</code> parameter and the AWS CLI common parameter
/// <code>--region</code> are set to the same value, the stack uses a
/// <i>regional</i> endpoint. If the <code>--stack-region</code>
/// parameter is not set, but the AWS CLI <code>--region</code> parameter is, this also
/// results in a stack with a <i>regional</i> endpoint. However, if the
/// <code>--region</code> parameter is set to <code>us-east-1</code>, and the
/// <code>--stack-region</code> parameter is set to one of the following, then the
/// stack uses a legacy or <i>classic</i> region: <code>us-west-1,
/// us-west-2, sa-east-1, eu-central-1, eu-west-1, ap-northeast-1, ap-southeast-1,
/// ap-southeast-2</code>. In this case, the actual API endpoint of the stack is in
/// <code>us-east-1</code>. Only the preceding regions are supported as classic
/// regions in the <code>us-east-1</code> API endpoint. Because it is a best practice to
/// choose the regional endpoint that is closest to where you manage AWS, we recommend
/// that you use regional endpoints for new stacks. The AWS CLI common
/// <code>--region</code> parameter always specifies a regional API endpoint; it
/// cannot be used to specify a classic AWS OpsWorks Stacks region.</p>
/// </note>
pub fn region(mut self, input: impl Into<std::string::String>) -> Self {
self.region = Some(input.into());
self
}
/// <p>The stack's AWS region, such as <code>ap-south-1</code>. For more information about
/// Amazon regions, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
/// <note>
/// <p>In the AWS CLI, this API maps to the <code>--stack-region</code> parameter. If the
/// <code>--stack-region</code> parameter and the AWS CLI common parameter
/// <code>--region</code> are set to the same value, the stack uses a
/// <i>regional</i> endpoint. If the <code>--stack-region</code>
/// parameter is not set, but the AWS CLI <code>--region</code> parameter is, this also
/// results in a stack with a <i>regional</i> endpoint. However, if the
/// <code>--region</code> parameter is set to <code>us-east-1</code>, and the
/// <code>--stack-region</code> parameter is set to one of the following, then the
/// stack uses a legacy or <i>classic</i> region: <code>us-west-1,
/// us-west-2, sa-east-1, eu-central-1, eu-west-1, ap-northeast-1, ap-southeast-1,
/// ap-southeast-2</code>. In this case, the actual API endpoint of the stack is in
/// <code>us-east-1</code>. Only the preceding regions are supported as classic
/// regions in the <code>us-east-1</code> API endpoint. Because it is a best practice to
/// choose the regional endpoint that is closest to where you manage AWS, we recommend
/// that you use regional endpoints for new stacks. The AWS CLI common
/// <code>--region</code> parameter always specifies a regional API endpoint; it
/// cannot be used to specify a classic AWS OpsWorks Stacks region.</p>
/// </note>
pub fn set_region(mut self, input: std::option::Option<std::string::String>) -> Self {
self.region = input;
self
}
/// <p>The ID of the VPC that the stack is to be launched into. The VPC must be in the stack's region. All instances are launched into this VPC. You cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2-Classic, the default value is <code>no VPC</code>.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2-Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the
/// other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively.</p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2-Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
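        ///
        /// A minimal sketch of the pairing described above: a nondefault VPC ID is supplied
        /// together with a default subnet ID in that VPC (the IDs below are placeholders):
        ///
        /// ```ignore
        /// let builder = crate::input::CreateStackInput::builder()
        ///     .vpc_id("vpc-0123456789abcdef0")
        ///     .default_subnet_id("subnet-0123456789abcdef0");
        /// ```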
pub fn vpc_id(mut self, input: impl Into<std::string::String>) -> Self {
self.vpc_id = Some(input.into());
self
}
/// <p>The ID of the VPC that the stack is to be launched into. The VPC must be in the stack's region. All instances are launched into this VPC. You cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2-Classic, the default value is <code>no VPC</code>.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2-Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the
/// other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively.</p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2-Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub fn set_vpc_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.vpc_id = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
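        ///
        /// A minimal sketch of adding one attribute entry (this assumes, as with other
        /// Smithy-generated enums, that `StackAttributesKeys` can be constructed from a
        /// string; the key and value below are placeholders):
        ///
        /// ```ignore
        /// let builder = crate::input::CreateStackInput::builder()
        ///     .attributes(crate::model::StackAttributesKeys::from("Color"), "rgb(135, 61, 98)");
        /// ```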
pub fn attributes(
mut self,
k: impl Into<crate::model::StackAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// <p>The stack's AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn service_role_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.service_role_arn = Some(input.into());
self
}
/// <p>The stack's AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_service_role_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.service_role_arn = input;
self
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.default_instance_profile_arn = Some(input.into());
self
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_default_instance_profile_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_instance_profile_arn = input;
self
}
/// <p>The stack's default operating system, which is installed on every instance unless you specify a different operating system when you create the instance. You can specify one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">
/// Using Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the current Amazon Linux version.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn default_os(mut self, input: impl Into<std::string::String>) -> Self {
self.default_os = Some(input.into());
self
}
/// <p>The stack's default operating system, which is installed on every instance unless you specify a different operating system when you create the instance. You can specify one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">
/// Using Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the current Amazon Linux version.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn set_default_os(mut self, input: std::option::Option<std::string::String>) -> Self {
self.default_os = input;
self
}
/// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname_theme = Some(input.into());
self
}
/// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn set_hostname_theme(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.hostname_theme = input;
self
}
/// <p>The stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn default_availability_zone(mut self, input: impl Into<std::string::String>) -> Self {
self.default_availability_zone = Some(input.into());
self
}
/// <p>The stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn set_default_availability_zone(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_availability_zone = input;
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(mut self, input: impl Into<std::string::String>) -> Self {
self.default_subnet_id = Some(input.into());
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn set_default_subnet_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_subnet_id = input;
self
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration attribute values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
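        ///
        /// A sketch of passing custom JSON in the format shown above; a raw string literal
        /// avoids the backslash escapes (the keys and values are placeholders):
        ///
        /// ```ignore
        /// let builder = crate::input::CreateStackInput::builder()
        ///     .custom_json(r#"{"key1": "value1", "key2": "value2"}"#);
        /// ```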
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration attribute values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
        /// <p>The configuration manager. When you create a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
mut self,
input: crate::model::StackConfigurationManager,
) -> Self {
self.configuration_manager = Some(input);
self
}
        /// <p>The configuration manager. When you create a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn set_configuration_manager(
mut self,
input: std::option::Option<crate::model::StackConfigurationManager>,
) -> Self {
self.configuration_manager = input;
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(mut self, input: crate::model::ChefConfiguration) -> Self {
self.chef_configuration = Some(input);
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn set_chef_configuration(
mut self,
input: std::option::Option<crate::model::ChefConfiguration>,
) -> Self {
self.chef_configuration = input;
self
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn use_custom_cookbooks(mut self, input: bool) -> Self {
self.use_custom_cookbooks = Some(input);
self
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn set_use_custom_cookbooks(mut self, input: std::option::Option<bool>) -> Self {
self.use_custom_cookbooks = input;
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(mut self, input: bool) -> Self {
self.use_opsworks_security_groups = Some(input);
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn set_use_opsworks_security_groups(
mut self,
input: std::option::Option<bool>,
) -> Self {
self.use_opsworks_security_groups = input;
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(mut self, input: crate::model::Source) -> Self {
self.custom_cookbooks_source = Some(input);
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn set_custom_cookbooks_source(
mut self,
input: std::option::Option<crate::model::Source>,
) -> Self {
self.custom_cookbooks_source = input;
self
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(mut self, input: impl Into<std::string::String>) -> Self {
self.default_ssh_key_name = Some(input.into());
self
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn set_default_ssh_key_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_ssh_key_name = input;
self
}
/// <p>The default root device type. This value is the default for all instances in the stack,
/// but you can override it when you create an instance. The default option is
/// <code>instance-store</code>. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(mut self, input: crate::model::RootDeviceType) -> Self {
self.default_root_device_type = Some(input);
self
}
/// <p>The default root device type. This value is the default for all instances in the stack,
/// but you can override it when you create an instance. The default option is
/// <code>instance-store</code>. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn set_default_root_device_type(
mut self,
input: std::option::Option<crate::model::RootDeviceType>,
) -> Self {
self.default_root_device_type = input;
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is the most recent release of the agent. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
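        ///
        /// For example, to opt in to automatic agent updates as described above (a sketch):
        ///
        /// ```ignore
        /// let builder = crate::input::CreateStackInput::builder().agent_version("LATEST");
        /// ```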
pub fn agent_version(mut self, input: impl Into<std::string::String>) -> Self {
self.agent_version = Some(input.into());
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is the most recent release of the agent. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn set_agent_version(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.agent_version = input;
self
}
/// Consumes the builder and constructs a [`CreateStackInput`](crate::input::CreateStackInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::CreateStackInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CreateStackInput {
name: self.name,
region: self.region,
vpc_id: self.vpc_id,
attributes: self.attributes,
service_role_arn: self.service_role_arn,
default_instance_profile_arn: self.default_instance_profile_arn,
default_os: self.default_os,
hostname_theme: self.hostname_theme,
default_availability_zone: self.default_availability_zone,
default_subnet_id: self.default_subnet_id,
custom_json: self.custom_json,
configuration_manager: self.configuration_manager,
chef_configuration: self.chef_configuration,
use_custom_cookbooks: self.use_custom_cookbooks,
use_opsworks_security_groups: self.use_opsworks_security_groups,
custom_cookbooks_source: self.custom_cookbooks_source,
default_ssh_key_name: self.default_ssh_key_name,
default_root_device_type: self.default_root_device_type,
agent_version: self.agent_version,
})
}
}
}
#[doc(hidden)]
pub type CreateStackInputOperationOutputAlias = crate::operation::CreateStack;
#[doc(hidden)]
pub type CreateStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateStackInput {
/// Consumes the builder and constructs an Operation<[`CreateStack`](crate::operation::CreateStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_create_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CreateStack::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateStackInput`](crate::input::CreateStackInput)
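    ///
    /// A minimal usage sketch of the builder chain (all field values below are
    /// placeholders, not working ARNs or names):
    ///
    /// ```ignore
    /// let input = crate::input::CreateStackInput::builder()
    ///     .name("my-stack")
    ///     .region("us-west-2")
    ///     .service_role_arn("arn:aws:iam::111122223333:role/aws-opsworks-service-role")
    ///     .default_instance_profile_arn("arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role")
    ///     .build()?;
    /// ```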
pub fn builder() -> crate::input::create_stack_input::Builder {
crate::input::create_stack_input::Builder::default()
}
}
/// See [`CreateUserProfileInput`](crate::input::CreateUserProfileInput)
pub mod create_user_profile_input {
/// A builder for [`CreateUserProfileInput`](crate::input::CreateUserProfileInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) iam_user_arn: std::option::Option<std::string::String>,
pub(crate) ssh_username: std::option::Option<std::string::String>,
pub(crate) ssh_public_key: std::option::Option<std::string::String>,
pub(crate) allow_self_management: std::option::Option<bool>,
}
impl Builder {
/// <p>The user's IAM ARN; this can also be a federated user's ARN.</p>
pub fn iam_user_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.iam_user_arn = Some(input.into());
self
}
/// <p>The user's IAM ARN; this can also be a federated user's ARN.</p>
pub fn set_iam_user_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.iam_user_arn = input;
self
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn ssh_username(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_username = Some(input.into());
self
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn set_ssh_username(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ssh_username = input;
self
}
/// <p>The user's public SSH key.</p>
pub fn ssh_public_key(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_public_key = Some(input.into());
self
}
/// <p>The user's public SSH key.</p>
pub fn set_ssh_public_key(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ssh_public_key = input;
self
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Setting an IAM
/// User's Public SSH Key</a>.</p>
pub fn allow_self_management(mut self, input: bool) -> Self {
self.allow_self_management = Some(input);
self
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Setting an IAM
/// User's Public SSH Key</a>.</p>
pub fn set_allow_self_management(mut self, input: std::option::Option<bool>) -> Self {
self.allow_self_management = input;
self
}
/// Consumes the builder and constructs a [`CreateUserProfileInput`](crate::input::CreateUserProfileInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::CreateUserProfileInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::CreateUserProfileInput {
iam_user_arn: self.iam_user_arn,
ssh_username: self.ssh_username,
ssh_public_key: self.ssh_public_key,
allow_self_management: self.allow_self_management,
})
}
}
}
#[doc(hidden)]
pub type CreateUserProfileInputOperationOutputAlias = crate::operation::CreateUserProfile;
#[doc(hidden)]
pub type CreateUserProfileInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl CreateUserProfileInput {
/// Consumes the builder and constructs an Operation<[`CreateUserProfile`](crate::operation::CreateUserProfile)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::CreateUserProfile,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::CreateUserProfileInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::CreateUserProfileInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::CreateUserProfileInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.CreateUserProfile",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_create_user_profile(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::CreateUserProfile::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"CreateUserProfile",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`CreateUserProfileInput`](crate::input::CreateUserProfileInput)
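    ///
    /// A minimal usage sketch (the ARN and user name below are placeholders):
    ///
    /// ```ignore
    /// let input = crate::input::CreateUserProfileInput::builder()
    ///     .iam_user_arn("arn:aws:iam::111122223333:user/example-user")
    ///     .ssh_username("example-user")
    ///     .allow_self_management(true)
    ///     .build()?;
    /// ```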
pub fn builder() -> crate::input::create_user_profile_input::Builder {
crate::input::create_user_profile_input::Builder::default()
}
}
/// See [`DeleteAppInput`](crate::input::DeleteAppInput)
pub mod delete_app_input {
/// A builder for [`DeleteAppInput`](crate::input::DeleteAppInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) app_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The app ID.</p>
pub fn app_id(mut self, input: impl Into<std::string::String>) -> Self {
self.app_id = Some(input.into());
self
}
/// <p>The app ID.</p>
pub fn set_app_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.app_id = input;
self
}
/// Consumes the builder and constructs a [`DeleteAppInput`](crate::input::DeleteAppInput)
pub fn build(
self,
) -> std::result::Result<crate::input::DeleteAppInput, aws_smithy_http::operation::BuildError>
{
Ok(crate::input::DeleteAppInput {
app_id: self.app_id,
})
}
}
}
#[doc(hidden)]
pub type DeleteAppInputOperationOutputAlias = crate::operation::DeleteApp;
#[doc(hidden)]
pub type DeleteAppInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeleteAppInput {
/// Consumes the builder and constructs an Operation<[`DeleteApp`](crate::operation::DeleteApp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeleteApp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeleteAppInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeleteAppInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeleteAppInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeleteApp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_delete_app(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op =
aws_smithy_http::operation::Operation::new(request, crate::operation::DeleteApp::new())
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeleteApp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeleteAppInput`](crate::input::DeleteAppInput)
pub fn builder() -> crate::input::delete_app_input::Builder {
crate::input::delete_app_input::Builder::default()
}
}
/// See [`DeleteInstanceInput`](crate::input::DeleteInstanceInput)
pub mod delete_instance_input {
/// A builder for [`DeleteInstanceInput`](crate::input::DeleteInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) delete_elastic_ip: std::option::Option<bool>,
pub(crate) delete_volumes: std::option::Option<bool>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// <p>Whether to delete the instance Elastic IP address.</p>
pub fn delete_elastic_ip(mut self, input: bool) -> Self {
self.delete_elastic_ip = Some(input);
self
}
/// <p>Whether to delete the instance Elastic IP address.</p>
pub fn set_delete_elastic_ip(mut self, input: std::option::Option<bool>) -> Self {
self.delete_elastic_ip = input;
self
}
/// <p>Whether to delete the instance's Amazon EBS volumes.</p>
pub fn delete_volumes(mut self, input: bool) -> Self {
self.delete_volumes = Some(input);
self
}
/// <p>Whether to delete the instance's Amazon EBS volumes.</p>
pub fn set_delete_volumes(mut self, input: std::option::Option<bool>) -> Self {
self.delete_volumes = input;
self
}
/// Consumes the builder and constructs a [`DeleteInstanceInput`](crate::input::DeleteInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeleteInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeleteInstanceInput {
instance_id: self.instance_id,
delete_elastic_ip: self.delete_elastic_ip,
delete_volumes: self.delete_volumes,
})
}
}
}
#[doc(hidden)]
pub type DeleteInstanceInputOperationOutputAlias = crate::operation::DeleteInstance;
#[doc(hidden)]
pub type DeleteInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeleteInstanceInput {
/// Consumes the builder and constructs an Operation<[`DeleteInstance`](crate::operation::DeleteInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeleteInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeleteInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeleteInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeleteInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeleteInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_delete_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeleteInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeleteInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeleteInstanceInput`](crate::input::DeleteInstanceInput)
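    ///
    /// A minimal usage sketch (the instance ID below is a placeholder):
    ///
    /// ```ignore
    /// let input = crate::input::DeleteInstanceInput::builder()
    ///     .instance_id("instance-id")
    ///     .delete_elastic_ip(true)
    ///     .delete_volumes(true)
    ///     .build()?;
    /// ```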
pub fn builder() -> crate::input::delete_instance_input::Builder {
crate::input::delete_instance_input::Builder::default()
}
}
/// See [`DeleteLayerInput`](crate::input::DeleteLayerInput)
pub mod delete_layer_input {
/// A builder for [`DeleteLayerInput`](crate::input::DeleteLayerInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) layer_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The layer ID.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The layer ID.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// Consumes the builder and constructs a [`DeleteLayerInput`](crate::input::DeleteLayerInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeleteLayerInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeleteLayerInput {
layer_id: self.layer_id,
})
}
}
}
#[doc(hidden)]
pub type DeleteLayerInputOperationOutputAlias = crate::operation::DeleteLayer;
#[doc(hidden)]
pub type DeleteLayerInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeleteLayerInput {
/// Consumes the builder and constructs an Operation<[`DeleteLayer`](crate::operation::DeleteLayer)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeleteLayer,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeleteLayerInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeleteLayerInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeleteLayerInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeleteLayer",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_delete_layer(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeleteLayer::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeleteLayer",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
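// Finalize the HTTP request: set `Content-Length` when the body size is known and
// attach the serialized body to the builder.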
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeleteLayerInput`](crate::input::DeleteLayerInput)
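///
/// A minimal usage sketch (not compiled as a doctest; the layer ID and the `config`
/// value are placeholders standing in for a real stack layer and a configured
/// [`Config`](crate::config::Config)):
///
/// ```ignore
/// let input = crate::input::DeleteLayerInput::builder()
///     .layer_id("example-layer-id")
///     .build()?;
/// // Turn the input into an operation that the service client can dispatch.
/// let operation = input.make_operation(&config).await?;
/// ```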
pub fn builder() -> crate::input::delete_layer_input::Builder {
crate::input::delete_layer_input::Builder::default()
}
}
/// See [`DeleteStackInput`](crate::input::DeleteStackInput)
pub mod delete_stack_input {
/// A builder for [`DeleteStackInput`](crate::input::DeleteStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`DeleteStackInput`](crate::input::DeleteStackInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeleteStackInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeleteStackInput {
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type DeleteStackInputOperationOutputAlias = crate::operation::DeleteStack;
#[doc(hidden)]
pub type DeleteStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeleteStackInput {
/// Consumes the builder and constructs an Operation<[`DeleteStack`](crate::operation::DeleteStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeleteStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeleteStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeleteStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeleteStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeleteStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_delete_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeleteStack::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeleteStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeleteStackInput`](crate::input::DeleteStackInput)
pub fn builder() -> crate::input::delete_stack_input::Builder {
crate::input::delete_stack_input::Builder::default()
}
}
/// See [`DeleteUserProfileInput`](crate::input::DeleteUserProfileInput)
pub mod delete_user_profile_input {
/// A builder for [`DeleteUserProfileInput`](crate::input::DeleteUserProfileInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) iam_user_arn: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.iam_user_arn = Some(input.into());
self
}
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn set_iam_user_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.iam_user_arn = input;
self
}
/// Consumes the builder and constructs a [`DeleteUserProfileInput`](crate::input::DeleteUserProfileInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeleteUserProfileInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeleteUserProfileInput {
iam_user_arn: self.iam_user_arn,
})
}
}
}
#[doc(hidden)]
pub type DeleteUserProfileInputOperationOutputAlias = crate::operation::DeleteUserProfile;
#[doc(hidden)]
pub type DeleteUserProfileInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeleteUserProfileInput {
/// Consumes the builder and constructs an Operation<[`DeleteUserProfile`](crate::operation::DeleteUserProfile)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeleteUserProfile,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeleteUserProfileInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeleteUserProfileInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeleteUserProfileInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeleteUserProfile",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_delete_user_profile(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeleteUserProfile::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeleteUserProfile",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeleteUserProfileInput`](crate::input::DeleteUserProfileInput)
pub fn builder() -> crate::input::delete_user_profile_input::Builder {
crate::input::delete_user_profile_input::Builder::default()
}
}
/// See [`DeregisterEcsClusterInput`](crate::input::DeregisterEcsClusterInput)
pub mod deregister_ecs_cluster_input {
/// A builder for [`DeregisterEcsClusterInput`](crate::input::DeregisterEcsClusterInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) ecs_cluster_arn: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The cluster's Amazon Resource Name (ARN).</p>
pub fn ecs_cluster_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.ecs_cluster_arn = Some(input.into());
self
}
/// <p>The cluster's Amazon Resource Name (ARN).</p>
pub fn set_ecs_cluster_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ecs_cluster_arn = input;
self
}
/// Consumes the builder and constructs a [`DeregisterEcsClusterInput`](crate::input::DeregisterEcsClusterInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeregisterEcsClusterInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeregisterEcsClusterInput {
ecs_cluster_arn: self.ecs_cluster_arn,
})
}
}
}
#[doc(hidden)]
pub type DeregisterEcsClusterInputOperationOutputAlias = crate::operation::DeregisterEcsCluster;
#[doc(hidden)]
pub type DeregisterEcsClusterInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeregisterEcsClusterInput {
/// Consumes the builder and constructs an Operation<[`DeregisterEcsCluster`](crate::operation::DeregisterEcsCluster)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeregisterEcsCluster,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeregisterEcsClusterInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeregisterEcsClusterInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeregisterEcsClusterInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeregisterEcsCluster",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_deregister_ecs_cluster(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeregisterEcsCluster::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeregisterEcsCluster",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeregisterEcsClusterInput`](crate::input::DeregisterEcsClusterInput)
pub fn builder() -> crate::input::deregister_ecs_cluster_input::Builder {
crate::input::deregister_ecs_cluster_input::Builder::default()
}
}
/// See [`DeregisterElasticIpInput`](crate::input::DeregisterElasticIpInput)
pub mod deregister_elastic_ip_input {
/// A builder for [`DeregisterElasticIpInput`](crate::input::DeregisterElasticIpInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_ip: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_ip = Some(input.into());
self
}
/// <p>The Elastic IP address.</p>
pub fn set_elastic_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.elastic_ip = input;
self
}
/// Consumes the builder and constructs a [`DeregisterElasticIpInput`](crate::input::DeregisterElasticIpInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeregisterElasticIpInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeregisterElasticIpInput {
elastic_ip: self.elastic_ip,
})
}
}
}
#[doc(hidden)]
pub type DeregisterElasticIpInputOperationOutputAlias = crate::operation::DeregisterElasticIp;
#[doc(hidden)]
pub type DeregisterElasticIpInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeregisterElasticIpInput {
/// Consumes the builder and constructs an Operation<[`DeregisterElasticIp`](crate::operation::DeregisterElasticIp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeregisterElasticIp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeregisterElasticIpInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeregisterElasticIpInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeregisterElasticIpInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeregisterElasticIp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_deregister_elastic_ip(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeregisterElasticIp::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeregisterElasticIp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeregisterElasticIpInput`](crate::input::DeregisterElasticIpInput)
pub fn builder() -> crate::input::deregister_elastic_ip_input::Builder {
crate::input::deregister_elastic_ip_input::Builder::default()
}
}
/// See [`DeregisterInstanceInput`](crate::input::DeregisterInstanceInput)
pub mod deregister_instance_input {
/// A builder for [`DeregisterInstanceInput`](crate::input::DeregisterInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs a [`DeregisterInstanceInput`](crate::input::DeregisterInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeregisterInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeregisterInstanceInput {
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type DeregisterInstanceInputOperationOutputAlias = crate::operation::DeregisterInstance;
#[doc(hidden)]
pub type DeregisterInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeregisterInstanceInput {
/// Consumes the builder and constructs an Operation<[`DeregisterInstance`](crate::operation::DeregisterInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeregisterInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeregisterInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeregisterInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeregisterInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeregisterInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_deregister_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeregisterInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeregisterInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeregisterInstanceInput`](crate::input::DeregisterInstanceInput)
pub fn builder() -> crate::input::deregister_instance_input::Builder {
crate::input::deregister_instance_input::Builder::default()
}
}
/// See [`DeregisterRdsDbInstanceInput`](crate::input::DeregisterRdsDbInstanceInput)
pub mod deregister_rds_db_instance_input {
/// A builder for [`DeregisterRdsDbInstanceInput`](crate::input::DeregisterRdsDbInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) rds_db_instance_arn: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.rds_db_instance_arn = Some(input.into());
self
}
/// <p>The Amazon RDS instance's ARN.</p>
pub fn set_rds_db_instance_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.rds_db_instance_arn = input;
self
}
/// Consumes the builder and constructs a [`DeregisterRdsDbInstanceInput`](crate::input::DeregisterRdsDbInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeregisterRdsDbInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeregisterRdsDbInstanceInput {
rds_db_instance_arn: self.rds_db_instance_arn,
})
}
}
}
#[doc(hidden)]
pub type DeregisterRdsDbInstanceInputOperationOutputAlias =
crate::operation::DeregisterRdsDbInstance;
#[doc(hidden)]
pub type DeregisterRdsDbInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeregisterRdsDbInstanceInput {
/// Consumes the builder and constructs an Operation<[`DeregisterRdsDbInstance`](crate::operation::DeregisterRdsDbInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeregisterRdsDbInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeregisterRdsDbInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeregisterRdsDbInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeregisterRdsDbInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeregisterRdsDbInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_deregister_rds_db_instance(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeregisterRdsDbInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeregisterRdsDbInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeregisterRdsDbInstanceInput`](crate::input::DeregisterRdsDbInstanceInput)
pub fn builder() -> crate::input::deregister_rds_db_instance_input::Builder {
crate::input::deregister_rds_db_instance_input::Builder::default()
}
}
/// See [`DeregisterVolumeInput`](crate::input::DeregisterVolumeInput)
pub mod deregister_volume_input {
/// A builder for [`DeregisterVolumeInput`](crate::input::DeregisterVolumeInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) volume_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The AWS OpsWorks Stacks volume ID, which is the GUID that AWS OpsWorks Stacks assigned to the volume when you registered it with the stack, not the Amazon EC2 volume ID.</p>
pub fn volume_id(mut self, input: impl Into<std::string::String>) -> Self {
self.volume_id = Some(input.into());
self
}
/// <p>The AWS OpsWorks Stacks volume ID, which is the GUID that AWS OpsWorks Stacks assigned to the volume when you registered it with the stack, not the Amazon EC2 volume ID.</p>
pub fn set_volume_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.volume_id = input;
self
}
/// Consumes the builder and constructs a [`DeregisterVolumeInput`](crate::input::DeregisterVolumeInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DeregisterVolumeInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DeregisterVolumeInput {
volume_id: self.volume_id,
})
}
}
}
#[doc(hidden)]
pub type DeregisterVolumeInputOperationOutputAlias = crate::operation::DeregisterVolume;
#[doc(hidden)]
pub type DeregisterVolumeInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DeregisterVolumeInput {
/// Consumes the builder and constructs an Operation<[`DeregisterVolume`](crate::operation::DeregisterVolume)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DeregisterVolume,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DeregisterVolumeInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DeregisterVolumeInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DeregisterVolumeInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DeregisterVolume",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_deregister_volume(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DeregisterVolume::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DeregisterVolume",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DeregisterVolumeInput`](crate::input::DeregisterVolumeInput)
pub fn builder() -> crate::input::deregister_volume_input::Builder {
crate::input::deregister_volume_input::Builder::default()
}
}
/// See [`DescribeAgentVersionsInput`](crate::input::DescribeAgentVersionsInput)
pub mod describe_agent_versions_input {
/// A builder for [`DescribeAgentVersionsInput`](crate::input::DescribeAgentVersionsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) configuration_manager:
std::option::Option<crate::model::StackConfigurationManager>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The configuration manager.</p>
pub fn configuration_manager(
mut self,
input: crate::model::StackConfigurationManager,
) -> Self {
self.configuration_manager = Some(input);
self
}
/// <p>The configuration manager.</p>
pub fn set_configuration_manager(
mut self,
input: std::option::Option<crate::model::StackConfigurationManager>,
) -> Self {
self.configuration_manager = input;
self
}
/// Consumes the builder and constructs a [`DescribeAgentVersionsInput`](crate::input::DescribeAgentVersionsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeAgentVersionsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeAgentVersionsInput {
stack_id: self.stack_id,
configuration_manager: self.configuration_manager,
})
}
}
}
#[doc(hidden)]
pub type DescribeAgentVersionsInputOperationOutputAlias = crate::operation::DescribeAgentVersions;
#[doc(hidden)]
pub type DescribeAgentVersionsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeAgentVersionsInput {
/// Consumes the builder and constructs an Operation<[`DescribeAgentVersions`](crate::operation::DescribeAgentVersions)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeAgentVersions,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeAgentVersionsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeAgentVersionsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeAgentVersionsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeAgentVersions",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_agent_versions(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeAgentVersions::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeAgentVersions",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeAgentVersionsInput`](crate::input::DescribeAgentVersionsInput)
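///
/// A sketch of filtering agent versions by stack and configuration manager (not
/// compiled as a doctest; it assumes the generated
/// [`StackConfigurationManager`](crate::model::StackConfigurationManager) model
/// exposes the usual generated `builder()` with `name`/`version` setters):
///
/// ```ignore
/// let input = crate::input::DescribeAgentVersionsInput::builder()
///     .stack_id("example-stack-id")
///     .configuration_manager(
///         crate::model::StackConfigurationManager::builder()
///             .name("Chef")
///             .version("12")
///             .build(),
///     )
///     .build()?;
/// ```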
pub fn builder() -> crate::input::describe_agent_versions_input::Builder {
crate::input::describe_agent_versions_input::Builder::default()
}
}
/// See [`DescribeAppsInput`](crate::input::DescribeAppsInput)
pub mod describe_apps_input {
/// A builder for [`DescribeAppsInput`](crate::input::DescribeAppsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) app_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The app stack ID. If you use this parameter, <code>DescribeApps</code> returns a description
/// of the apps in the specified stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The app stack ID. If you use this parameter, <code>DescribeApps</code> returns a description
/// of the apps in the specified stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `app_ids`.
///
/// To override the contents of this collection use [`set_app_ids`](Self::set_app_ids).
///
/// <p>An array of app IDs for the apps to be described. If you use this parameter,
/// <code>DescribeApps</code> returns a description of the specified apps. Otherwise, it returns
/// a description of every app.</p>
pub fn app_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.app_ids.unwrap_or_default();
v.push(input.into());
self.app_ids = Some(v);
self
}
/// <p>An array of app IDs for the apps to be described. If you use this parameter,
/// <code>DescribeApps</code> returns a description of the specified apps. Otherwise, it returns
/// a description of every app.</p>
pub fn set_app_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.app_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeAppsInput`](crate::input::DescribeAppsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeAppsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeAppsInput {
stack_id: self.stack_id,
app_ids: self.app_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeAppsInputOperationOutputAlias = crate::operation::DescribeApps;
#[doc(hidden)]
pub type DescribeAppsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeAppsInput {
/// Consumes the builder and constructs an Operation<[`DescribeApps`](crate::operation::DescribeApps)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeApps,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeAppsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeAppsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeAppsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeApps",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_describe_apps(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeApps::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeApps",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeAppsInput`](crate::input::DescribeAppsInput)
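///
/// A sketch of describing specific apps (not compiled as a doctest; the IDs are
/// placeholders). Each `app_ids` call appends one ID, while `set_app_ids` would
/// replace the whole collection instead:
///
/// ```ignore
/// let input = crate::input::DescribeAppsInput::builder()
///     .stack_id("example-stack-id")
///     .app_ids("first-app-id")
///     .app_ids("second-app-id")
///     .build()?;
/// ```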
pub fn builder() -> crate::input::describe_apps_input::Builder {
crate::input::describe_apps_input::Builder::default()
}
}
/// See [`DescribeCommandsInput`](crate::input::DescribeCommandsInput)
pub mod describe_commands_input {
/// A builder for [`DescribeCommandsInput`](crate::input::DescribeCommandsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) deployment_id: std::option::Option<std::string::String>,
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) command_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The deployment ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified deployment.</p>
pub fn deployment_id(mut self, input: impl Into<std::string::String>) -> Self {
self.deployment_id = Some(input.into());
self
}
/// <p>The deployment ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified deployment.</p>
pub fn set_deployment_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.deployment_id = input;
self
}
/// <p>The instance ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified instance.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified instance.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Appends an item to `command_ids`.
///
/// To override the contents of this collection use [`set_command_ids`](Self::set_command_ids).
///
/// <p>An array of command IDs. If you include this parameter, <code>DescribeCommands</code> returns
/// a description of the specified commands. Otherwise, it returns a description of every
/// command.</p>
pub fn command_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.command_ids.unwrap_or_default();
v.push(input.into());
self.command_ids = Some(v);
self
}
/// <p>An array of command IDs. If you include this parameter, <code>DescribeCommands</code> returns
/// a description of the specified commands. Otherwise, it returns a description of every
/// command.</p>
pub fn set_command_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.command_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeCommandsInput`](crate::input::DescribeCommandsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeCommandsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeCommandsInput {
deployment_id: self.deployment_id,
instance_id: self.instance_id,
command_ids: self.command_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeCommandsInputOperationOutputAlias = crate::operation::DescribeCommands;
#[doc(hidden)]
pub type DescribeCommandsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeCommandsInput {
/// Consumes the builder and constructs an Operation<[`DescribeCommands`](crate::operation::DescribeCommands)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeCommands,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeCommandsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeCommandsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeCommandsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeCommands",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_commands(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeCommands::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeCommands",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeCommandsInput`](crate::input::DescribeCommandsInput)
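///
/// A sketch of listing the commands that ran on one instance (not compiled as a
/// doctest; the instance ID is a placeholder). Leaving `command_ids` unset asks for
/// every command associated with that instance:
///
/// ```ignore
/// let input = crate::input::DescribeCommandsInput::builder()
///     .instance_id("example-instance-id")
///     .build()?;
/// ```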
pub fn builder() -> crate::input::describe_commands_input::Builder {
crate::input::describe_commands_input::Builder::default()
}
}
/// See [`DescribeDeploymentsInput`](crate::input::DescribeDeploymentsInput)
pub mod describe_deployments_input {
/// A builder for [`DescribeDeploymentsInput`](crate::input::DescribeDeploymentsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) app_id: std::option::Option<std::string::String>,
pub(crate) deployment_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The stack ID. If you include this parameter, the command returns a
/// description of the deployments associated with the specified stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
        /// <p>The stack ID. If you include this parameter, the command returns a
        /// description of the deployments associated with the specified stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
        /// <p>The app ID. If you include this parameter, the command returns a
        /// description of the deployments associated with the specified app.</p>
pub fn app_id(mut self, input: impl Into<std::string::String>) -> Self {
self.app_id = Some(input.into());
self
}
        /// <p>The app ID. If you include this parameter, the command returns a
        /// description of the deployments associated with the specified app.</p>
pub fn set_app_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.app_id = input;
self
}
/// Appends an item to `deployment_ids`.
///
/// To override the contents of this collection use [`set_deployment_ids`](Self::set_deployment_ids).
///
/// <p>An array of deployment IDs to be described. If you include this parameter,
/// the command returns a description of the specified deployments.
/// Otherwise, it returns a description of every deployment.</p>
pub fn deployment_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.deployment_ids.unwrap_or_default();
v.push(input.into());
self.deployment_ids = Some(v);
self
}
/// <p>An array of deployment IDs to be described. If you include this parameter,
/// the command returns a description of the specified deployments.
/// Otherwise, it returns a description of every deployment.</p>
pub fn set_deployment_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.deployment_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeDeploymentsInput`](crate::input::DescribeDeploymentsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeDeploymentsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeDeploymentsInput {
stack_id: self.stack_id,
app_id: self.app_id,
deployment_ids: self.deployment_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeDeploymentsInputOperationOutputAlias = crate::operation::DescribeDeployments;
#[doc(hidden)]
pub type DescribeDeploymentsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeDeploymentsInput {
/// Consumes the builder and constructs an Operation<[`DescribeDeployments`](crate::operation::DescribeDeployments)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeDeployments,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeDeploymentsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeDeploymentsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeDeploymentsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeDeployments",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_deployments(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeDeployments::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeDeployments",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeDeploymentsInput`](crate::input::DescribeDeploymentsInput)
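    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the IDs below are placeholders):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeDeploymentsInput::builder()
    ///     .stack_id("EXAMPLE-stack-id")
    ///     .deployment_ids("EXAMPLE-deployment-id")
    ///     .build()
    ///     .expect("DescribeDeploymentsInput should build");
    /// ```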
pub fn builder() -> crate::input::describe_deployments_input::Builder {
crate::input::describe_deployments_input::Builder::default()
}
}
/// See [`DescribeEcsClustersInput`](crate::input::DescribeEcsClustersInput)
pub mod describe_ecs_clusters_input {
/// A builder for [`DescribeEcsClustersInput`](crate::input::DescribeEcsClustersInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) ecs_cluster_arns: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) next_token: std::option::Option<std::string::String>,
pub(crate) max_results: std::option::Option<i32>,
}
impl Builder {
/// Appends an item to `ecs_cluster_arns`.
///
/// To override the contents of this collection use [`set_ecs_cluster_arns`](Self::set_ecs_cluster_arns).
///
/// <p>A list of ARNs, one for each cluster to be described.</p>
pub fn ecs_cluster_arns(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.ecs_cluster_arns.unwrap_or_default();
v.push(input.into());
self.ecs_cluster_arns = Some(v);
self
}
/// <p>A list of ARNs, one for each cluster to be described.</p>
pub fn set_ecs_cluster_arns(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.ecs_cluster_arns = input;
self
}
/// <p>A stack ID.
/// <code>DescribeEcsClusters</code> returns a description of the cluster that is registered with the stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>A stack ID.
/// <code>DescribeEcsClusters</code> returns a description of the cluster that is registered with the stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>If the previous paginated request did not return all of the remaining results,
        /// the response object's <code>NextToken</code> parameter value is set to a token.
/// To retrieve the next set of results, call <code>DescribeEcsClusters</code>
/// again and assign that token to the request object's <code>NextToken</code> parameter.
/// If there are no remaining results, the previous response
/// object's <code>NextToken</code> parameter is set to <code>null</code>.</p>
pub fn next_token(mut self, input: impl Into<std::string::String>) -> Self {
self.next_token = Some(input.into());
self
}
/// <p>If the previous paginated request did not return all of the remaining results,
        /// the response object's <code>NextToken</code> parameter value is set to a token.
/// To retrieve the next set of results, call <code>DescribeEcsClusters</code>
/// again and assign that token to the request object's <code>NextToken</code> parameter.
/// If there are no remaining results, the previous response
/// object's <code>NextToken</code> parameter is set to <code>null</code>.</p>
pub fn set_next_token(mut self, input: std::option::Option<std::string::String>) -> Self {
self.next_token = input;
self
}
/// <p>To receive a paginated response, use this parameter to specify the maximum number
/// of results to be returned with a single call. If the number of available results exceeds this maximum, the
/// response includes a <code>NextToken</code> value that you can assign
/// to the <code>NextToken</code> request parameter to get the next set of results.</p>
pub fn max_results(mut self, input: i32) -> Self {
self.max_results = Some(input);
self
}
/// <p>To receive a paginated response, use this parameter to specify the maximum number
/// of results to be returned with a single call. If the number of available results exceeds this maximum, the
/// response includes a <code>NextToken</code> value that you can assign
/// to the <code>NextToken</code> request parameter to get the next set of results.</p>
pub fn set_max_results(mut self, input: std::option::Option<i32>) -> Self {
self.max_results = input;
self
}
/// Consumes the builder and constructs a [`DescribeEcsClustersInput`](crate::input::DescribeEcsClustersInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeEcsClustersInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeEcsClustersInput {
ecs_cluster_arns: self.ecs_cluster_arns,
stack_id: self.stack_id,
next_token: self.next_token,
max_results: self.max_results,
})
}
}
}
#[doc(hidden)]
pub type DescribeEcsClustersInputOperationOutputAlias = crate::operation::DescribeEcsClusters;
#[doc(hidden)]
pub type DescribeEcsClustersInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeEcsClustersInput {
/// Consumes the builder and constructs an Operation<[`DescribeEcsClusters`](crate::operation::DescribeEcsClusters)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeEcsClusters,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeEcsClustersInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeEcsClustersInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeEcsClustersInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeEcsClusters",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_ecs_clusters(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeEcsClusters::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeEcsClusters",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeEcsClustersInput`](crate::input::DescribeEcsClustersInput)
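    ///
    /// A minimal usage sketch of the paginated request, assuming this crate is
    /// published as `aws_sdk_opsworks` (the stack ID is a placeholder):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeEcsClustersInput::builder()
    ///     .stack_id("EXAMPLE-stack-id")
    ///     .max_results(10)
    ///     .build()
    ///     .expect("DescribeEcsClustersInput should build");
    /// // Subsequent pages would pass the returned NextToken via `.next_token(...)`.
    /// ```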
pub fn builder() -> crate::input::describe_ecs_clusters_input::Builder {
crate::input::describe_ecs_clusters_input::Builder::default()
}
}
/// See [`DescribeElasticIpsInput`](crate::input::DescribeElasticIpsInput)
pub mod describe_elastic_ips_input {
/// A builder for [`DescribeElasticIpsInput`](crate::input::DescribeElasticIpsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) ips: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The instance ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses associated with the specified instance.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses associated with the specified instance.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// <p>A stack ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses that are registered with the specified stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>A stack ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses that are registered with the specified stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `ips`.
///
/// To override the contents of this collection use [`set_ips`](Self::set_ips).
///
/// <p>An array of Elastic IP addresses to be described. If you include this parameter,
/// <code>DescribeElasticIps</code> returns a description of the specified Elastic IP addresses.
/// Otherwise, it returns a description of every Elastic IP address.</p>
pub fn ips(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.ips.unwrap_or_default();
v.push(input.into());
self.ips = Some(v);
self
}
/// <p>An array of Elastic IP addresses to be described. If you include this parameter,
/// <code>DescribeElasticIps</code> returns a description of the specified Elastic IP addresses.
/// Otherwise, it returns a description of every Elastic IP address.</p>
pub fn set_ips(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.ips = input;
self
}
/// Consumes the builder and constructs a [`DescribeElasticIpsInput`](crate::input::DescribeElasticIpsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeElasticIpsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeElasticIpsInput {
instance_id: self.instance_id,
stack_id: self.stack_id,
ips: self.ips,
})
}
}
}
#[doc(hidden)]
pub type DescribeElasticIpsInputOperationOutputAlias = crate::operation::DescribeElasticIps;
#[doc(hidden)]
pub type DescribeElasticIpsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeElasticIpsInput {
/// Consumes the builder and constructs an Operation<[`DescribeElasticIps`](crate::operation::DescribeElasticIps)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeElasticIps,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeElasticIpsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeElasticIpsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeElasticIpsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeElasticIps",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_elastic_ips(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeElasticIps::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeElasticIps",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeElasticIpsInput`](crate::input::DescribeElasticIpsInput)
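    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the instance ID is a placeholder):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeElasticIpsInput::builder()
    ///     .instance_id("EXAMPLE-instance-id")
    ///     .build()
    ///     .expect("DescribeElasticIpsInput should build");
    /// ```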
pub fn builder() -> crate::input::describe_elastic_ips_input::Builder {
crate::input::describe_elastic_ips_input::Builder::default()
}
}
/// See [`DescribeElasticLoadBalancersInput`](crate::input::DescribeElasticLoadBalancersInput)
pub mod describe_elastic_load_balancers_input {
/// A builder for [`DescribeElasticLoadBalancersInput`](crate::input::DescribeElasticLoadBalancersInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>A stack ID. The action describes the stack's Elastic Load Balancing instances.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>A stack ID. The action describes the stack's Elastic Load Balancing instances.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>A list of layer IDs. The action describes the Elastic Load Balancing instances for the specified layers.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>A list of layer IDs. The action describes the Elastic Load Balancing instances for the specified layers.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeElasticLoadBalancersInput`](crate::input::DescribeElasticLoadBalancersInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeElasticLoadBalancersInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeElasticLoadBalancersInput {
stack_id: self.stack_id,
layer_ids: self.layer_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeElasticLoadBalancersInputOperationOutputAlias =
crate::operation::DescribeElasticLoadBalancers;
#[doc(hidden)]
pub type DescribeElasticLoadBalancersInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeElasticLoadBalancersInput {
/// Consumes the builder and constructs an Operation<[`DescribeElasticLoadBalancers`](crate::operation::DescribeElasticLoadBalancers)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeElasticLoadBalancers,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeElasticLoadBalancersInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeElasticLoadBalancersInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeElasticLoadBalancersInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeElasticLoadBalancers",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
        let body =
            crate::operation_ser::serialize_operation_crate_operation_describe_elastic_load_balancers(
                &self,
            )?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeElasticLoadBalancers::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeElasticLoadBalancers",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeElasticLoadBalancersInput`](crate::input::DescribeElasticLoadBalancersInput)
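    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the IDs are placeholders):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeElasticLoadBalancersInput::builder()
    ///     .stack_id("EXAMPLE-stack-id")
    ///     .layer_ids("EXAMPLE-layer-id")
    ///     .build()
    ///     .expect("DescribeElasticLoadBalancersInput should build");
    /// ```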
pub fn builder() -> crate::input::describe_elastic_load_balancers_input::Builder {
crate::input::describe_elastic_load_balancers_input::Builder::default()
}
}
/// See [`DescribeInstancesInput`](crate::input::DescribeInstancesInput)
pub mod describe_instances_input {
/// A builder for [`DescribeInstancesInput`](crate::input::DescribeInstancesInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) layer_id: std::option::Option<std::string::String>,
pub(crate) instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>A stack ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>A stack ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>A layer ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified layer.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>A layer ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified layer.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// Appends an item to `instance_ids`.
///
/// To override the contents of this collection use [`set_instance_ids`](Self::set_instance_ids).
///
/// <p>An array of instance IDs to be described. If you use this parameter,
/// <code>DescribeInstances</code> returns a description of the specified instances. Otherwise,
/// it returns a description of every instance.</p>
pub fn instance_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.instance_ids.unwrap_or_default();
v.push(input.into());
self.instance_ids = Some(v);
self
}
/// <p>An array of instance IDs to be described. If you use this parameter,
/// <code>DescribeInstances</code> returns a description of the specified instances. Otherwise,
/// it returns a description of every instance.</p>
pub fn set_instance_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.instance_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeInstancesInput`](crate::input::DescribeInstancesInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeInstancesInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeInstancesInput {
stack_id: self.stack_id,
layer_id: self.layer_id,
instance_ids: self.instance_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeInstancesInputOperationOutputAlias = crate::operation::DescribeInstances;
#[doc(hidden)]
pub type DescribeInstancesInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeInstancesInput {
/// Consumes the builder and constructs an Operation<[`DescribeInstances`](crate::operation::DescribeInstances)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeInstances,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeInstancesInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeInstancesInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeInstancesInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeInstances",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_instances(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeInstances::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeInstances",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeInstancesInput`](crate::input::DescribeInstancesInput)
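    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the layer ID is a placeholder):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeInstancesInput::builder()
    ///     .layer_id("EXAMPLE-layer-id")
    ///     .build()
    ///     .expect("DescribeInstancesInput should build");
    /// ```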
pub fn builder() -> crate::input::describe_instances_input::Builder {
crate::input::describe_instances_input::Builder::default()
}
}
/// See [`DescribeLayersInput`](crate::input::DescribeLayersInput)
pub mod describe_layers_input {
/// A builder for [`DescribeLayersInput`](crate::input::DescribeLayersInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>An array of layer IDs that specify the layers to be described. If you omit this parameter,
/// <code>DescribeLayers</code> returns a description of every layer in the specified stack.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>An array of layer IDs that specify the layers to be described. If you omit this parameter,
/// <code>DescribeLayers</code> returns a description of every layer in the specified stack.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeLayersInput`](crate::input::DescribeLayersInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeLayersInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeLayersInput {
stack_id: self.stack_id,
layer_ids: self.layer_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeLayersInputOperationOutputAlias = crate::operation::DescribeLayers;
#[doc(hidden)]
pub type DescribeLayersInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeLayersInput {
/// Consumes the builder and constructs an Operation<[`DescribeLayers`](crate::operation::DescribeLayers)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeLayers,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeLayersInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeLayersInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeLayersInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeLayers",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_layers(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeLayers::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeLayers",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeLayersInput`](crate::input::DescribeLayersInput)
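    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the stack ID is a placeholder; omitting `layer_ids` describes every layer
    /// in the stack):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeLayersInput::builder()
    ///     .stack_id("EXAMPLE-stack-id")
    ///     .build()
    ///     .expect("DescribeLayersInput should build");
    /// ```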
pub fn builder() -> crate::input::describe_layers_input::Builder {
crate::input::describe_layers_input::Builder::default()
}
}
/// See [`DescribeLoadBasedAutoScalingInput`](crate::input::DescribeLoadBasedAutoScalingInput)
pub mod describe_load_based_auto_scaling_input {
/// A builder for [`DescribeLoadBasedAutoScalingInput`](crate::input::DescribeLoadBasedAutoScalingInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>An array of layer IDs.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>An array of layer IDs.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeLoadBasedAutoScalingInput`](crate::input::DescribeLoadBasedAutoScalingInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeLoadBasedAutoScalingInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeLoadBasedAutoScalingInput {
layer_ids: self.layer_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeLoadBasedAutoScalingInputOperationOutputAlias =
crate::operation::DescribeLoadBasedAutoScaling;
#[doc(hidden)]
pub type DescribeLoadBasedAutoScalingInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeLoadBasedAutoScalingInput {
/// Consumes the builder and constructs an Operation<[`DescribeLoadBasedAutoScaling`](crate::operation::DescribeLoadBasedAutoScaling)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeLoadBasedAutoScaling,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeLoadBasedAutoScalingInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeLoadBasedAutoScalingInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeLoadBasedAutoScalingInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeLoadBasedAutoScaling",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
        let body =
            crate::operation_ser::serialize_operation_crate_operation_describe_load_based_auto_scaling(
                &self,
            )?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeLoadBasedAutoScaling::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeLoadBasedAutoScaling",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeLoadBasedAutoScalingInput`](crate::input::DescribeLoadBasedAutoScalingInput)
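    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`
    /// (the layer ID is a placeholder; `layer_ids` appends a single ID per call):
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeLoadBasedAutoScalingInput::builder()
    ///     .layer_ids("EXAMPLE-layer-id")
    ///     .build()
    ///     .expect("DescribeLoadBasedAutoScalingInput should build");
    /// ```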
pub fn builder() -> crate::input::describe_load_based_auto_scaling_input::Builder {
crate::input::describe_load_based_auto_scaling_input::Builder::default()
}
}
/// See [`DescribeMyUserProfileInput`](crate::input::DescribeMyUserProfileInput)
pub mod describe_my_user_profile_input {
/// A builder for [`DescribeMyUserProfileInput`](crate::input::DescribeMyUserProfileInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {}
impl Builder {
/// Consumes the builder and constructs a [`DescribeMyUserProfileInput`](crate::input::DescribeMyUserProfileInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeMyUserProfileInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeMyUserProfileInput {})
}
}
}
#[doc(hidden)]
pub type DescribeMyUserProfileInputOperationOutputAlias = crate::operation::DescribeMyUserProfile;
#[doc(hidden)]
pub type DescribeMyUserProfileInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeMyUserProfileInput {
/// Consumes the builder and constructs an Operation<[`DescribeMyUserProfile`](crate::operation::DescribeMyUserProfile)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeMyUserProfile,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeMyUserProfileInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeMyUserProfileInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeMyUserProfileInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeMyUserProfile",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_my_user_profile(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeMyUserProfile::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeMyUserProfile",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeMyUserProfileInput`](crate::input::DescribeMyUserProfileInput)
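    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`;
    /// this input has no fields:
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeMyUserProfileInput::builder()
    ///     .build()
    ///     .expect("DescribeMyUserProfileInput should build");
    /// ```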
pub fn builder() -> crate::input::describe_my_user_profile_input::Builder {
crate::input::describe_my_user_profile_input::Builder::default()
}
}
/// See [`DescribeOperatingSystemsInput`](crate::input::DescribeOperatingSystemsInput)
pub mod describe_operating_systems_input {
/// A builder for [`DescribeOperatingSystemsInput`](crate::input::DescribeOperatingSystemsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {}
impl Builder {
/// Consumes the builder and constructs a [`DescribeOperatingSystemsInput`](crate::input::DescribeOperatingSystemsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeOperatingSystemsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeOperatingSystemsInput {})
}
}
}
#[doc(hidden)]
pub type DescribeOperatingSystemsInputOperationOutputAlias =
crate::operation::DescribeOperatingSystems;
#[doc(hidden)]
pub type DescribeOperatingSystemsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeOperatingSystemsInput {
/// Consumes the builder and constructs an Operation<[`DescribeOperatingSystems`](crate::operation::DescribeOperatingSystems)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeOperatingSystems,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeOperatingSystemsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeOperatingSystemsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeOperatingSystemsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeOperatingSystems",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_operating_systems(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeOperatingSystems::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeOperatingSystems",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeOperatingSystemsInput`](crate::input::DescribeOperatingSystemsInput)
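    ///
    /// A minimal usage sketch, assuming this crate is published as `aws_sdk_opsworks`;
    /// this input has no fields:
    ///
    /// ```no_run
    /// let input = aws_sdk_opsworks::input::DescribeOperatingSystemsInput::builder()
    ///     .build()
    ///     .expect("DescribeOperatingSystemsInput should build");
    /// ```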
pub fn builder() -> crate::input::describe_operating_systems_input::Builder {
crate::input::describe_operating_systems_input::Builder::default()
}
}
/// See [`DescribePermissionsInput`](crate::input::DescribePermissionsInput)
pub mod describe_permissions_input {
/// A builder for [`DescribePermissionsInput`](crate::input::DescribePermissionsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) iam_user_arn: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The user's IAM ARN. This can also be a federated user's ARN. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn iam_user_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.iam_user_arn = Some(input.into());
self
}
/// <p>The user's IAM ARN. This can also be a federated user's ARN. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_iam_user_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.iam_user_arn = input;
self
}
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`DescribePermissionsInput`](crate::input::DescribePermissionsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribePermissionsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribePermissionsInput {
iam_user_arn: self.iam_user_arn,
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type DescribePermissionsInputOperationOutputAlias = crate::operation::DescribePermissions;
#[doc(hidden)]
pub type DescribePermissionsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribePermissionsInput {
/// Consumes the builder and constructs an Operation<[`DescribePermissions`](crate::operation::DescribePermissions)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribePermissions,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribePermissionsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribePermissionsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribePermissionsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribePermissions",
);
Ok(builder)
}
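// From here on, the generated code assembles the final operation: it serializes
// the input into the request body, wraps the HTTP request together with a shared
// property bag, and then records the user agent, signing configuration, signing
// service name, endpoint resolver, region, and credentials provider from
// `_config` before constructing the retryable `Operation`.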
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_permissions(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribePermissions::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribePermissions",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribePermissionsInput`](crate::input::DescribePermissionsInput)
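///
/// A minimal usage sketch of this builder; the ARN and stack ID below are
/// placeholders, not real resources, and the block is marked `ignore` because it
/// is illustrative only:
///
/// ```ignore
/// let input = crate::input::describe_permissions_input::Builder::default()
///     .iam_user_arn("arn:aws:iam::123456789012:user/example-user") // placeholder ARN
///     .stack_id("example-stack-id") // placeholder stack ID
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```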
pub fn builder() -> crate::input::describe_permissions_input::Builder {
crate::input::describe_permissions_input::Builder::default()
}
}
/// See [`DescribeRaidArraysInput`](crate::input::DescribeRaidArraysInput)
pub mod describe_raid_arrays_input {
/// A builder for [`DescribeRaidArraysInput`](crate::input::DescribeRaidArraysInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) raid_array_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The instance ID. If you use this parameter, <code>DescribeRaidArrays</code> returns
/// descriptions of the RAID arrays associated with the specified instance. </p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID. If you use this parameter, <code>DescribeRaidArrays</code> returns
/// descriptions of the RAID arrays associated with the specified instance. </p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `raid_array_ids`.
///
/// To override the contents of this collection use [`set_raid_array_ids`](Self::set_raid_array_ids).
///
/// <p>An array of RAID array IDs. If you use this parameter, <code>DescribeRaidArrays</code>
/// returns descriptions of the specified arrays. Otherwise, it returns a description of every
/// array.</p>
pub fn raid_array_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.raid_array_ids.unwrap_or_default();
v.push(input.into());
self.raid_array_ids = Some(v);
self
}
/// <p>An array of RAID array IDs. If you use this parameter, <code>DescribeRaidArrays</code>
/// returns descriptions of the specified arrays. Otherwise, it returns a description of every
/// array.</p>
pub fn set_raid_array_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.raid_array_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeRaidArraysInput`](crate::input::DescribeRaidArraysInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeRaidArraysInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeRaidArraysInput {
instance_id: self.instance_id,
stack_id: self.stack_id,
raid_array_ids: self.raid_array_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeRaidArraysInputOperationOutputAlias = crate::operation::DescribeRaidArrays;
#[doc(hidden)]
pub type DescribeRaidArraysInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeRaidArraysInput {
/// Consumes the builder and constructs an Operation<[`DescribeRaidArrays`](crate::operation::DescribeRaidArrays)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeRaidArrays,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeRaidArraysInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeRaidArraysInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeRaidArraysInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeRaidArrays",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_raid_arrays(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeRaidArrays::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeRaidArrays",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeRaidArraysInput`](crate::input::DescribeRaidArraysInput)
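///
/// A minimal usage sketch of this builder; the IDs are placeholders and the block
/// is marked `ignore` because it is illustrative only. Note that
/// `raid_array_ids` appends one ID per call:
///
/// ```ignore
/// let input = crate::input::describe_raid_arrays_input::Builder::default()
///     .stack_id("example-stack-id")            // placeholder stack ID
///     .raid_array_ids("example-raid-array-id") // appends a single placeholder ID
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```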
pub fn builder() -> crate::input::describe_raid_arrays_input::Builder {
crate::input::describe_raid_arrays_input::Builder::default()
}
}
/// See [`DescribeRdsDbInstancesInput`](crate::input::DescribeRdsDbInstancesInput)
pub mod describe_rds_db_instances_input {
/// A builder for [`DescribeRdsDbInstancesInput`](crate::input::DescribeRdsDbInstancesInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) rds_db_instance_arns: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The ID of the stack with which the instances are registered. The operation returns descriptions of all registered Amazon RDS instances.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The ID of the stack with which the instances are registered. The operation returns descriptions of all registered Amazon RDS instances.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Appends an item to `rds_db_instance_arns`.
///
/// To override the contents of this collection use [`set_rds_db_instance_arns`](Self::set_rds_db_instance_arns).
///
/// <p>An array containing the ARNs of the instances to be described.</p>
pub fn rds_db_instance_arns(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.rds_db_instance_arns.unwrap_or_default();
v.push(input.into());
self.rds_db_instance_arns = Some(v);
self
}
/// <p>An array containing the ARNs of the instances to be described.</p>
pub fn set_rds_db_instance_arns(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.rds_db_instance_arns = input;
self
}
/// Consumes the builder and constructs a [`DescribeRdsDbInstancesInput`](crate::input::DescribeRdsDbInstancesInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeRdsDbInstancesInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeRdsDbInstancesInput {
stack_id: self.stack_id,
rds_db_instance_arns: self.rds_db_instance_arns,
})
}
}
}
#[doc(hidden)]
pub type DescribeRdsDbInstancesInputOperationOutputAlias = crate::operation::DescribeRdsDbInstances;
#[doc(hidden)]
pub type DescribeRdsDbInstancesInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeRdsDbInstancesInput {
/// Consumes the builder and constructs an Operation<[`DescribeRdsDbInstances`](crate::operation::DescribeRdsDbInstances)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeRdsDbInstances,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeRdsDbInstancesInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeRdsDbInstancesInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeRdsDbInstancesInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeRdsDbInstances",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_rds_db_instances(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeRdsDbInstances::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeRdsDbInstances",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeRdsDbInstancesInput`](crate::input::DescribeRdsDbInstancesInput)
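///
/// A minimal usage sketch of this builder; the stack ID and ARN are placeholders
/// and the block is marked `ignore` because it is illustrative only. Each call to
/// `rds_db_instance_arns` appends one ARN:
///
/// ```ignore
/// let input = crate::input::describe_rds_db_instances_input::Builder::default()
///     .stack_id("example-stack-id") // placeholder stack ID
///     .rds_db_instance_arns("arn:aws:rds:us-east-1:123456789012:db:example") // placeholder ARN
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```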
pub fn builder() -> crate::input::describe_rds_db_instances_input::Builder {
crate::input::describe_rds_db_instances_input::Builder::default()
}
}
/// See [`DescribeServiceErrorsInput`](crate::input::DescribeServiceErrorsInput)
pub mod describe_service_errors_input {
/// A builder for [`DescribeServiceErrorsInput`](crate::input::DescribeServiceErrorsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) service_error_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The stack ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified stack.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified stack.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The instance ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified instance.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified instance.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Appends an item to `service_error_ids`.
///
/// To override the contents of this collection use [`set_service_error_ids`](Self::set_service_error_ids).
///
/// <p>An array of service error IDs. If you use this parameter, <code>DescribeServiceErrors</code>
/// returns descriptions of the specified errors. Otherwise, it returns a description of every
/// error.</p>
pub fn service_error_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.service_error_ids.unwrap_or_default();
v.push(input.into());
self.service_error_ids = Some(v);
self
}
/// <p>An array of service error IDs. If you use this parameter, <code>DescribeServiceErrors</code>
/// returns descriptions of the specified errors. Otherwise, it returns a description of every
/// error.</p>
pub fn set_service_error_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.service_error_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeServiceErrorsInput`](crate::input::DescribeServiceErrorsInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeServiceErrorsInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeServiceErrorsInput {
stack_id: self.stack_id,
instance_id: self.instance_id,
service_error_ids: self.service_error_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeServiceErrorsInputOperationOutputAlias = crate::operation::DescribeServiceErrors;
#[doc(hidden)]
pub type DescribeServiceErrorsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeServiceErrorsInput {
/// Consumes the builder and constructs an Operation<[`DescribeServiceErrors`](crate::operation::DescribeServiceErrors)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeServiceErrors,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeServiceErrorsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeServiceErrorsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeServiceErrorsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeServiceErrors",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_service_errors(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeServiceErrors::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeServiceErrors",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeServiceErrorsInput`](crate::input::DescribeServiceErrorsInput)
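///
/// A minimal usage sketch of this builder; per the field docs above, each filter
/// is optional, the stack ID is a placeholder, and the block is marked `ignore`
/// because it is illustrative only:
///
/// ```ignore
/// let input = crate::input::describe_service_errors_input::Builder::default()
///     .stack_id("example-stack-id") // placeholder stack ID
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```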
pub fn builder() -> crate::input::describe_service_errors_input::Builder {
crate::input::describe_service_errors_input::Builder::default()
}
}
/// See [`DescribeStackProvisioningParametersInput`](crate::input::DescribeStackProvisioningParametersInput)
pub mod describe_stack_provisioning_parameters_input {
/// A builder for [`DescribeStackProvisioningParametersInput`](crate::input::DescribeStackProvisioningParametersInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`DescribeStackProvisioningParametersInput`](crate::input::DescribeStackProvisioningParametersInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeStackProvisioningParametersInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeStackProvisioningParametersInput {
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type DescribeStackProvisioningParametersInputOperationOutputAlias =
crate::operation::DescribeStackProvisioningParameters;
#[doc(hidden)]
pub type DescribeStackProvisioningParametersInputOperationRetryAlias =
aws_http::AwsErrorRetryPolicy;
impl DescribeStackProvisioningParametersInput {
/// Consumes the builder and constructs an Operation<[`DescribeStackProvisioningParameters`](crate::operation::DescribeStackProvisioningParameters)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeStackProvisioningParameters,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeStackProvisioningParametersInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeStackProvisioningParametersInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeStackProvisioningParametersInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeStackProvisioningParameters",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_stack_provisioning_parameters(&self)?
;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeStackProvisioningParameters::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeStackProvisioningParameters",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeStackProvisioningParametersInput`](crate::input::DescribeStackProvisioningParametersInput)
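///
/// A minimal usage sketch of this builder; the stack ID is a placeholder and the
/// block is marked `ignore` because it is illustrative only:
///
/// ```ignore
/// let input =
///     crate::input::describe_stack_provisioning_parameters_input::Builder::default()
///         .stack_id("example-stack-id") // placeholder stack ID
///         .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```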
pub fn builder() -> crate::input::describe_stack_provisioning_parameters_input::Builder {
crate::input::describe_stack_provisioning_parameters_input::Builder::default()
}
}
/// See [`DescribeStacksInput`](crate::input::DescribeStacksInput)
pub mod describe_stacks_input {
/// A builder for [`DescribeStacksInput`](crate::input::DescribeStacksInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// Appends an item to `stack_ids`.
///
/// To override the contents of this collection use [`set_stack_ids`](Self::set_stack_ids).
///
/// <p>An array of stack IDs that specify the stacks to be described. If you omit this parameter,
/// <code>DescribeStacks</code> returns a description of every stack.</p>
pub fn stack_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.stack_ids.unwrap_or_default();
v.push(input.into());
self.stack_ids = Some(v);
self
}
/// <p>An array of stack IDs that specify the stacks to be described. If you omit this parameter,
/// <code>DescribeStacks</code> returns a description of every stack.</p>
pub fn set_stack_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.stack_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeStacksInput`](crate::input::DescribeStacksInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeStacksInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeStacksInput {
stack_ids: self.stack_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeStacksInputOperationOutputAlias = crate::operation::DescribeStacks;
#[doc(hidden)]
pub type DescribeStacksInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeStacksInput {
/// Consumes the builder and constructs an Operation<[`DescribeStacks`](crate::operation::DescribeStacks)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeStacks,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeStacksInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeStacksInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeStacksInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeStacks",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_stacks(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeStacks::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeStacks",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeStacksInput`](crate::input::DescribeStacksInput)
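///
/// A minimal usage sketch of this builder; the stack IDs are placeholders and the
/// block is marked `ignore` because it is illustrative only. Each call to
/// `stack_ids` appends one ID; omit it entirely to describe every stack:
///
/// ```ignore
/// let input = crate::input::describe_stacks_input::Builder::default()
///     .stack_ids("example-stack-id-1") // placeholder, appended
///     .stack_ids("example-stack-id-2") // placeholder, appended
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```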
pub fn builder() -> crate::input::describe_stacks_input::Builder {
crate::input::describe_stacks_input::Builder::default()
}
}
/// See [`DescribeStackSummaryInput`](crate::input::DescribeStackSummaryInput)
pub mod describe_stack_summary_input {
/// A builder for [`DescribeStackSummaryInput`](crate::input::DescribeStackSummaryInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`DescribeStackSummaryInput`](crate::input::DescribeStackSummaryInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeStackSummaryInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeStackSummaryInput {
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type DescribeStackSummaryInputOperationOutputAlias = crate::operation::DescribeStackSummary;
#[doc(hidden)]
pub type DescribeStackSummaryInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeStackSummaryInput {
/// Consumes the builder and constructs an Operation<[`DescribeStackSummary`](crate::operation::DescribeStackSummary)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeStackSummary,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeStackSummaryInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeStackSummaryInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeStackSummaryInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeStackSummary",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_stack_summary(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeStackSummary::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeStackSummary",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeStackSummaryInput`](crate::input::DescribeStackSummaryInput)
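///
/// A minimal usage sketch of this builder; the stack ID is a placeholder and the
/// block is marked `ignore` because it is illustrative only:
///
/// ```ignore
/// let input = crate::input::describe_stack_summary_input::Builder::default()
///     .stack_id("example-stack-id") // placeholder stack ID
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```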
pub fn builder() -> crate::input::describe_stack_summary_input::Builder {
crate::input::describe_stack_summary_input::Builder::default()
}
}
/// See [`DescribeTimeBasedAutoScalingInput`](crate::input::DescribeTimeBasedAutoScalingInput)
pub mod describe_time_based_auto_scaling_input {
/// A builder for [`DescribeTimeBasedAutoScalingInput`](crate::input::DescribeTimeBasedAutoScalingInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// Appends an item to `instance_ids`.
///
/// To override the contents of this collection use [`set_instance_ids`](Self::set_instance_ids).
///
/// <p>An array of instance IDs.</p>
pub fn instance_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.instance_ids.unwrap_or_default();
v.push(input.into());
self.instance_ids = Some(v);
self
}
/// <p>An array of instance IDs.</p>
pub fn set_instance_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.instance_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeTimeBasedAutoScalingInput`](crate::input::DescribeTimeBasedAutoScalingInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeTimeBasedAutoScalingInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeTimeBasedAutoScalingInput {
instance_ids: self.instance_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeTimeBasedAutoScalingInputOperationOutputAlias =
crate::operation::DescribeTimeBasedAutoScaling;
#[doc(hidden)]
pub type DescribeTimeBasedAutoScalingInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeTimeBasedAutoScalingInput {
/// Consumes the builder and constructs an Operation<[`DescribeTimeBasedAutoScaling`](crate::operation::DescribeTimeBasedAutoScaling)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeTimeBasedAutoScaling,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeTimeBasedAutoScalingInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeTimeBasedAutoScalingInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeTimeBasedAutoScalingInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeTimeBasedAutoScaling",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_time_based_auto_scaling(&self)?
;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeTimeBasedAutoScaling::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeTimeBasedAutoScaling",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeTimeBasedAutoScalingInput`](crate::input::DescribeTimeBasedAutoScalingInput)
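///
/// A minimal usage sketch of this builder; the instance ID is a placeholder and
/// the block is marked `ignore` because it is illustrative only. Each call to
/// `instance_ids` appends one ID:
///
/// ```ignore
/// let input = crate::input::describe_time_based_auto_scaling_input::Builder::default()
///     .instance_ids("example-instance-id") // placeholder, appended
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```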
pub fn builder() -> crate::input::describe_time_based_auto_scaling_input::Builder {
crate::input::describe_time_based_auto_scaling_input::Builder::default()
}
}
/// See [`DescribeUserProfilesInput`](crate::input::DescribeUserProfilesInput)
pub mod describe_user_profiles_input {
/// A builder for [`DescribeUserProfilesInput`](crate::input::DescribeUserProfilesInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) iam_user_arns: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// Appends an item to `iam_user_arns`.
///
/// To override the contents of this collection use [`set_iam_user_arns`](Self::set_iam_user_arns).
///
/// <p>An array of IAM or federated user ARNs that identify the users to be described.</p>
pub fn iam_user_arns(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.iam_user_arns.unwrap_or_default();
v.push(input.into());
self.iam_user_arns = Some(v);
self
}
/// <p>An array of IAM or federated user ARNs that identify the users to be described.</p>
pub fn set_iam_user_arns(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.iam_user_arns = input;
self
}
/// Consumes the builder and constructs a [`DescribeUserProfilesInput`](crate::input::DescribeUserProfilesInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeUserProfilesInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeUserProfilesInput {
iam_user_arns: self.iam_user_arns,
})
}
}
}
#[doc(hidden)]
pub type DescribeUserProfilesInputOperationOutputAlias = crate::operation::DescribeUserProfiles;
#[doc(hidden)]
pub type DescribeUserProfilesInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeUserProfilesInput {
/// Consumes the builder and constructs an Operation<[`DescribeUserProfiles`](crate::operation::DescribeUserProfiles)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeUserProfiles,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeUserProfilesInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeUserProfilesInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeUserProfilesInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeUserProfiles",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_user_profiles(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeUserProfiles::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeUserProfiles",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeUserProfilesInput`](crate::input::DescribeUserProfilesInput)
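///
/// A minimal usage sketch of this builder; the ARN is a placeholder and the block
/// is marked `ignore` because it is illustrative only. Each call to
/// `iam_user_arns` appends one ARN:
///
/// ```ignore
/// let input = crate::input::describe_user_profiles_input::Builder::default()
///     .iam_user_arns("arn:aws:iam::123456789012:user/example-user") // placeholder, appended
///     .build()?; // `?` assumes a caller returning Result<_, BuildError>
/// ```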
pub fn builder() -> crate::input::describe_user_profiles_input::Builder {
crate::input::describe_user_profiles_input::Builder::default()
}
}
/// See [`DescribeVolumesInput`](crate::input::DescribeVolumesInput)
pub mod describe_volumes_input {
/// A builder for [`DescribeVolumesInput`](crate::input::DescribeVolumesInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) raid_array_id: std::option::Option<std::string::String>,
pub(crate) volume_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The instance ID. If you use this parameter, <code>DescribeVolumes</code> returns descriptions
/// of the volumes associated with the specified instance.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID. If you use this parameter, <code>DescribeVolumes</code> returns descriptions
/// of the volumes associated with the specified instance.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// <p>A stack ID. The action describes the stack's registered Amazon EBS volumes.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>A stack ID. The action describes the stack's registered Amazon EBS volumes.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The RAID array ID. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the volumes associated with the specified RAID array.</p>
pub fn raid_array_id(mut self, input: impl Into<std::string::String>) -> Self {
self.raid_array_id = Some(input.into());
self
}
/// <p>The RAID array ID. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the volumes associated with the specified RAID array.</p>
pub fn set_raid_array_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.raid_array_id = input;
self
}
/// Appends an item to `volume_ids`.
///
/// To override the contents of this collection use [`set_volume_ids`](Self::set_volume_ids).
///
/// <p>An array of volume IDs. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the specified volumes. Otherwise, it returns a description of every
/// volume.</p>
pub fn volume_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.volume_ids.unwrap_or_default();
v.push(input.into());
self.volume_ids = Some(v);
self
}
/// <p>An array of volume IDs. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the specified volumes. Otherwise, it returns a description of every
/// volume.</p>
pub fn set_volume_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.volume_ids = input;
self
}
/// Consumes the builder and constructs a [`DescribeVolumesInput`](crate::input::DescribeVolumesInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DescribeVolumesInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DescribeVolumesInput {
instance_id: self.instance_id,
stack_id: self.stack_id,
raid_array_id: self.raid_array_id,
volume_ids: self.volume_ids,
})
}
}
}
#[doc(hidden)]
pub type DescribeVolumesInputOperationOutputAlias = crate::operation::DescribeVolumes;
#[doc(hidden)]
pub type DescribeVolumesInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DescribeVolumesInput {
/// Consumes the builder and constructs an Operation<[`DescribeVolumes`](crate::operation::DescribeVolumes)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DescribeVolumes,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DescribeVolumesInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DescribeVolumesInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DescribeVolumesInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DescribeVolumes",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_describe_volumes(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DescribeVolumes::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DescribeVolumes",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DescribeVolumesInput`](crate::input::DescribeVolumesInput)
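    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the volume IDs below are placeholders):
    ///
    /// ```no_run
    /// // Each call to `volume_ids` appends one ID; `set_volume_ids` replaces the whole list.
    /// let input = aws_sdk_opsworks::input::DescribeVolumesInput::builder()
    ///     .volume_ids("vol-0123456789abcdef0")
    ///     .volume_ids("vol-0fedcba9876543210")
    ///     .build()
    ///     .expect("DescribeVolumesInput fields are all optional");
    /// # let _ = input;
    /// ```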
pub fn builder() -> crate::input::describe_volumes_input::Builder {
crate::input::describe_volumes_input::Builder::default()
}
}
/// See [`DetachElasticLoadBalancerInput`](crate::input::DetachElasticLoadBalancerInput)
pub mod detach_elastic_load_balancer_input {
/// A builder for [`DetachElasticLoadBalancerInput`](crate::input::DetachElasticLoadBalancerInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_load_balancer_name: std::option::Option<std::string::String>,
pub(crate) layer_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn elastic_load_balancer_name(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_load_balancer_name = Some(input.into());
self
}
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn set_elastic_load_balancer_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.elastic_load_balancer_name = input;
self
}
/// <p>The ID of the layer that the Elastic Load Balancing instance is attached to.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The ID of the layer that the Elastic Load Balancing instance is attached to.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// Consumes the builder and constructs a [`DetachElasticLoadBalancerInput`](crate::input::DetachElasticLoadBalancerInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DetachElasticLoadBalancerInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DetachElasticLoadBalancerInput {
elastic_load_balancer_name: self.elastic_load_balancer_name,
layer_id: self.layer_id,
})
}
}
}
#[doc(hidden)]
pub type DetachElasticLoadBalancerInputOperationOutputAlias =
crate::operation::DetachElasticLoadBalancer;
#[doc(hidden)]
pub type DetachElasticLoadBalancerInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DetachElasticLoadBalancerInput {
/// Consumes the builder and constructs an Operation<[`DetachElasticLoadBalancer`](crate::operation::DetachElasticLoadBalancer)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DetachElasticLoadBalancer,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DetachElasticLoadBalancerInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DetachElasticLoadBalancerInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DetachElasticLoadBalancerInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DetachElasticLoadBalancer",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_detach_elastic_load_balancer(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DetachElasticLoadBalancer::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DetachElasticLoadBalancer",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DetachElasticLoadBalancerInput`](crate::input::DetachElasticLoadBalancerInput)
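    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the load balancer name and layer ID are placeholders):
    ///
    /// ```no_run
    /// // Detach the named Elastic Load Balancing instance from the given layer.
    /// let input = aws_sdk_opsworks::input::DetachElasticLoadBalancerInput::builder()
    ///     .elastic_load_balancer_name("example-elb")
    ///     .layer_id("EXAMPLE-LAYER-ID")
    ///     .build()
    ///     .expect("building DetachElasticLoadBalancerInput should not fail");
    /// # let _ = input;
    /// ```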
pub fn builder() -> crate::input::detach_elastic_load_balancer_input::Builder {
crate::input::detach_elastic_load_balancer_input::Builder::default()
}
}
/// See [`DisassociateElasticIpInput`](crate::input::DisassociateElasticIpInput)
pub mod disassociate_elastic_ip_input {
/// A builder for [`DisassociateElasticIpInput`](crate::input::DisassociateElasticIpInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_ip: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_ip = Some(input.into());
self
}
/// <p>The Elastic IP address.</p>
pub fn set_elastic_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.elastic_ip = input;
self
}
/// Consumes the builder and constructs a [`DisassociateElasticIpInput`](crate::input::DisassociateElasticIpInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::DisassociateElasticIpInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::DisassociateElasticIpInput {
elastic_ip: self.elastic_ip,
})
}
}
}
#[doc(hidden)]
pub type DisassociateElasticIpInputOperationOutputAlias = crate::operation::DisassociateElasticIp;
#[doc(hidden)]
pub type DisassociateElasticIpInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl DisassociateElasticIpInput {
/// Consumes the builder and constructs an Operation<[`DisassociateElasticIp`](crate::operation::DisassociateElasticIp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::DisassociateElasticIp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::DisassociateElasticIpInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::DisassociateElasticIpInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::DisassociateElasticIpInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.DisassociateElasticIp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_disassociate_elastic_ip(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::DisassociateElasticIp::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"DisassociateElasticIp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`DisassociateElasticIpInput`](crate::input::DisassociateElasticIpInput)
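    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the address below is a documentation placeholder):
    ///
    /// ```no_run
    /// // Disassociate a registered Elastic IP address from its instance.
    /// let input = aws_sdk_opsworks::input::DisassociateElasticIpInput::builder()
    ///     .elastic_ip("203.0.113.10")
    ///     .build()
    ///     .expect("building DisassociateElasticIpInput should not fail");
    /// # let _ = input;
    /// ```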
pub fn builder() -> crate::input::disassociate_elastic_ip_input::Builder {
crate::input::disassociate_elastic_ip_input::Builder::default()
}
}
/// See [`GetHostnameSuggestionInput`](crate::input::GetHostnameSuggestionInput)
pub mod get_hostname_suggestion_input {
/// A builder for [`GetHostnameSuggestionInput`](crate::input::GetHostnameSuggestionInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) layer_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The layer ID.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The layer ID.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// Consumes the builder and constructs a [`GetHostnameSuggestionInput`](crate::input::GetHostnameSuggestionInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::GetHostnameSuggestionInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::GetHostnameSuggestionInput {
layer_id: self.layer_id,
})
}
}
}
#[doc(hidden)]
pub type GetHostnameSuggestionInputOperationOutputAlias = crate::operation::GetHostnameSuggestion;
#[doc(hidden)]
pub type GetHostnameSuggestionInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl GetHostnameSuggestionInput {
/// Consumes the builder and constructs an Operation<[`GetHostnameSuggestion`](crate::operation::GetHostnameSuggestion)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::GetHostnameSuggestion,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::GetHostnameSuggestionInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::GetHostnameSuggestionInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::GetHostnameSuggestionInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.GetHostnameSuggestion",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_get_hostname_suggestion(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::GetHostnameSuggestion::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"GetHostnameSuggestion",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`GetHostnameSuggestionInput`](crate::input::GetHostnameSuggestionInput)
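    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the layer ID is a placeholder):
    ///
    /// ```no_run
    /// // Ask for a hostname suggestion for the given layer.
    /// let input = aws_sdk_opsworks::input::GetHostnameSuggestionInput::builder()
    ///     .layer_id("EXAMPLE-LAYER-ID")
    ///     .build()
    ///     .expect("building GetHostnameSuggestionInput should not fail");
    /// # let _ = input;
    /// ```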
pub fn builder() -> crate::input::get_hostname_suggestion_input::Builder {
crate::input::get_hostname_suggestion_input::Builder::default()
}
}
/// See [`GrantAccessInput`](crate::input::GrantAccessInput)
pub mod grant_access_input {
/// A builder for [`GrantAccessInput`](crate::input::GrantAccessInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) valid_for_in_minutes: std::option::Option<i32>,
}
impl Builder {
/// <p>The instance's AWS OpsWorks Stacks ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance's AWS OpsWorks Stacks ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
        /// <p>The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user will no longer be able to use the credentials to log in. If the user is logged in at the time, they are logged out automatically.</p>
pub fn valid_for_in_minutes(mut self, input: i32) -> Self {
self.valid_for_in_minutes = Some(input);
self
}
        /// <p>The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user will no longer be able to use the credentials to log in. If the user is logged in at the time, they are logged out automatically.</p>
pub fn set_valid_for_in_minutes(mut self, input: std::option::Option<i32>) -> Self {
self.valid_for_in_minutes = input;
self
}
/// Consumes the builder and constructs a [`GrantAccessInput`](crate::input::GrantAccessInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::GrantAccessInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::GrantAccessInput {
instance_id: self.instance_id,
valid_for_in_minutes: self.valid_for_in_minutes,
})
}
}
}
#[doc(hidden)]
pub type GrantAccessInputOperationOutputAlias = crate::operation::GrantAccess;
#[doc(hidden)]
pub type GrantAccessInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl GrantAccessInput {
/// Consumes the builder and constructs an Operation<[`GrantAccess`](crate::operation::GrantAccess)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::GrantAccess,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::GrantAccessInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::GrantAccessInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::GrantAccessInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.GrantAccess",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_grant_access(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::GrantAccess::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"GrantAccess",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`GrantAccessInput`](crate::input::GrantAccessInput)
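    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the instance ID is a placeholder):
    ///
    /// ```no_run
    /// // Request temporary credentials that remain valid for 60 minutes.
    /// let input = aws_sdk_opsworks::input::GrantAccessInput::builder()
    ///     .instance_id("EXAMPLE-INSTANCE-ID")
    ///     .valid_for_in_minutes(60)
    ///     .build()
    ///     .expect("building GrantAccessInput should not fail");
    /// # let _ = input;
    /// ```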
pub fn builder() -> crate::input::grant_access_input::Builder {
crate::input::grant_access_input::Builder::default()
}
}
/// See [`ListTagsInput`](crate::input::ListTagsInput)
pub mod list_tags_input {
/// A builder for [`ListTagsInput`](crate::input::ListTagsInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) resource_arn: std::option::Option<std::string::String>,
pub(crate) max_results: std::option::Option<i32>,
pub(crate) next_token: std::option::Option<std::string::String>,
}
impl Builder {
        /// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn resource_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.resource_arn = Some(input.into());
self
}
        /// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn set_resource_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.resource_arn = input;
self
}
/// <p>Do not use. A validation exception occurs if you add a <code>MaxResults</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn max_results(mut self, input: i32) -> Self {
self.max_results = Some(input);
self
}
/// <p>Do not use. A validation exception occurs if you add a <code>MaxResults</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn set_max_results(mut self, input: std::option::Option<i32>) -> Self {
self.max_results = input;
self
}
/// <p>Do not use. A validation exception occurs if you add a <code>NextToken</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn next_token(mut self, input: impl Into<std::string::String>) -> Self {
self.next_token = Some(input.into());
self
}
/// <p>Do not use. A validation exception occurs if you add a <code>NextToken</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn set_next_token(mut self, input: std::option::Option<std::string::String>) -> Self {
self.next_token = input;
self
}
/// Consumes the builder and constructs a [`ListTagsInput`](crate::input::ListTagsInput)
pub fn build(
self,
) -> std::result::Result<crate::input::ListTagsInput, aws_smithy_http::operation::BuildError>
{
Ok(crate::input::ListTagsInput {
resource_arn: self.resource_arn,
max_results: self.max_results.unwrap_or_default(),
next_token: self.next_token,
})
}
}
}
#[doc(hidden)]
pub type ListTagsInputOperationOutputAlias = crate::operation::ListTags;
#[doc(hidden)]
pub type ListTagsInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl ListTagsInput {
/// Consumes the builder and constructs an Operation<[`ListTags`](crate::operation::ListTags)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::ListTags,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::ListTagsInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::ListTagsInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::ListTagsInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.ListTags",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_list_tags(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op =
aws_smithy_http::operation::Operation::new(request, crate::operation::ListTags::new())
.with_metadata(aws_smithy_http::operation::Metadata::new(
"ListTags", "opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`ListTagsInput`](crate::input::ListTagsInput)
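    ///
    /// A hedged sketch of calling the operation through the crate's fluent client
    /// (it assumes a configured `aws_sdk_opsworks::Client` is available and uses a
    /// placeholder ARN; <code>MaxResults</code> and <code>NextToken</code> are left unset, as documented above):
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_opsworks::Client) -> Result<(), Box<dyn std::error::Error>> {
    /// // List the tags attached to a stack or layer identified by its ARN.
    /// let _resp = client
    ///     .list_tags()
    ///     .resource_arn("arn:aws:opsworks:us-east-1:111122223333:stack/EXAMPLE-STACK-ID/")
    ///     .send()
    ///     .await?;
    /// # Ok(())
    /// # }
    /// ```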
pub fn builder() -> crate::input::list_tags_input::Builder {
crate::input::list_tags_input::Builder::default()
}
}
/// See [`RebootInstanceInput`](crate::input::RebootInstanceInput)
pub mod reboot_instance_input {
/// A builder for [`RebootInstanceInput`](crate::input::RebootInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs a [`RebootInstanceInput`](crate::input::RebootInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::RebootInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RebootInstanceInput {
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type RebootInstanceInputOperationOutputAlias = crate::operation::RebootInstance;
#[doc(hidden)]
pub type RebootInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RebootInstanceInput {
/// Consumes the builder and constructs an Operation<[`RebootInstance`](crate::operation::RebootInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RebootInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::RebootInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RebootInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RebootInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RebootInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_reboot_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RebootInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RebootInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RebootInstanceInput`](crate::input::RebootInstanceInput)
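    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the instance ID is a placeholder):
    ///
    /// ```no_run
    /// // Reboot a single registered instance.
    /// let input = aws_sdk_opsworks::input::RebootInstanceInput::builder()
    ///     .instance_id("EXAMPLE-INSTANCE-ID")
    ///     .build()
    ///     .expect("building RebootInstanceInput should not fail");
    /// # let _ = input;
    /// ```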
pub fn builder() -> crate::input::reboot_instance_input::Builder {
crate::input::reboot_instance_input::Builder::default()
}
}
/// See [`RegisterEcsClusterInput`](crate::input::RegisterEcsClusterInput)
pub mod register_ecs_cluster_input {
/// A builder for [`RegisterEcsClusterInput`](crate::input::RegisterEcsClusterInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) ecs_cluster_arn: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The cluster's ARN.</p>
pub fn ecs_cluster_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.ecs_cluster_arn = Some(input.into());
self
}
/// <p>The cluster's ARN.</p>
pub fn set_ecs_cluster_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ecs_cluster_arn = input;
self
}
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`RegisterEcsClusterInput`](crate::input::RegisterEcsClusterInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::RegisterEcsClusterInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RegisterEcsClusterInput {
ecs_cluster_arn: self.ecs_cluster_arn,
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type RegisterEcsClusterInputOperationOutputAlias = crate::operation::RegisterEcsCluster;
#[doc(hidden)]
pub type RegisterEcsClusterInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RegisterEcsClusterInput {
/// Consumes the builder and constructs an Operation<[`RegisterEcsCluster`](crate::operation::RegisterEcsCluster)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RegisterEcsCluster,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::RegisterEcsClusterInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RegisterEcsClusterInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RegisterEcsClusterInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RegisterEcsCluster",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_register_ecs_cluster(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RegisterEcsCluster::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RegisterEcsCluster",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RegisterEcsClusterInput`](crate::input::RegisterEcsClusterInput)
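    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the cluster ARN and stack ID are placeholders):
    ///
    /// ```no_run
    /// // Register an existing Amazon ECS cluster with a stack.
    /// let input = aws_sdk_opsworks::input::RegisterEcsClusterInput::builder()
    ///     .ecs_cluster_arn("arn:aws:ecs:us-east-1:111122223333:cluster/example-cluster")
    ///     .stack_id("EXAMPLE-STACK-ID")
    ///     .build()
    ///     .expect("building RegisterEcsClusterInput should not fail");
    /// # let _ = input;
    /// ```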
pub fn builder() -> crate::input::register_ecs_cluster_input::Builder {
crate::input::register_ecs_cluster_input::Builder::default()
}
}
/// See [`RegisterElasticIpInput`](crate::input::RegisterElasticIpInput)
pub mod register_elastic_ip_input {
/// A builder for [`RegisterElasticIpInput`](crate::input::RegisterElasticIpInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_ip: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_ip = Some(input.into());
self
}
/// <p>The Elastic IP address.</p>
pub fn set_elastic_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.elastic_ip = input;
self
}
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`RegisterElasticIpInput`](crate::input::RegisterElasticIpInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::RegisterElasticIpInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RegisterElasticIpInput {
elastic_ip: self.elastic_ip,
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type RegisterElasticIpInputOperationOutputAlias = crate::operation::RegisterElasticIp;
#[doc(hidden)]
pub type RegisterElasticIpInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RegisterElasticIpInput {
/// Consumes the builder and constructs an Operation<[`RegisterElasticIp`](crate::operation::RegisterElasticIp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RegisterElasticIp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::RegisterElasticIpInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RegisterElasticIpInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RegisterElasticIpInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RegisterElasticIp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_register_elastic_ip(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RegisterElasticIp::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RegisterElasticIp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RegisterElasticIpInput`](crate::input::RegisterElasticIpInput)
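    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the address and stack ID are placeholders):
    ///
    /// ```no_run
    /// // Register an Elastic IP address with a stack.
    /// let input = aws_sdk_opsworks::input::RegisterElasticIpInput::builder()
    ///     .elastic_ip("203.0.113.25")
    ///     .stack_id("EXAMPLE-STACK-ID")
    ///     .build()
    ///     .expect("building RegisterElasticIpInput should not fail");
    /// # let _ = input;
    /// ```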
pub fn builder() -> crate::input::register_elastic_ip_input::Builder {
crate::input::register_elastic_ip_input::Builder::default()
}
}
/// See [`RegisterInstanceInput`](crate::input::RegisterInstanceInput)
pub mod register_instance_input {
/// A builder for [`RegisterInstanceInput`](crate::input::RegisterInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) hostname: std::option::Option<std::string::String>,
pub(crate) public_ip: std::option::Option<std::string::String>,
pub(crate) private_ip: std::option::Option<std::string::String>,
pub(crate) rsa_public_key: std::option::Option<std::string::String>,
pub(crate) rsa_public_key_fingerprint: std::option::Option<std::string::String>,
pub(crate) instance_identity: std::option::Option<crate::model::InstanceIdentity>,
}
impl Builder {
/// <p>The ID of the stack that the instance is to be registered with.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The ID of the stack that the instance is to be registered with.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The instance's hostname.</p>
pub fn hostname(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname = Some(input.into());
self
}
/// <p>The instance's hostname.</p>
pub fn set_hostname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.hostname = input;
self
}
/// <p>The instance's public IP address.</p>
pub fn public_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.public_ip = Some(input.into());
self
}
/// <p>The instance's public IP address.</p>
pub fn set_public_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.public_ip = input;
self
}
/// <p>The instance's private IP address.</p>
pub fn private_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.private_ip = Some(input.into());
self
}
/// <p>The instance's private IP address.</p>
pub fn set_private_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.private_ip = input;
self
}
        /// <p>The instance's public RSA key. This key is used to encrypt communication between the instance and the service.</p>
pub fn rsa_public_key(mut self, input: impl Into<std::string::String>) -> Self {
self.rsa_public_key = Some(input.into());
self
}
        /// <p>The instance's public RSA key. This key is used to encrypt communication between the instance and the service.</p>
pub fn set_rsa_public_key(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.rsa_public_key = input;
self
}
        /// <p>The instance's public RSA key fingerprint.</p>
pub fn rsa_public_key_fingerprint(mut self, input: impl Into<std::string::String>) -> Self {
self.rsa_public_key_fingerprint = Some(input.into());
self
}
        /// <p>The instance's public RSA key fingerprint.</p>
pub fn set_rsa_public_key_fingerprint(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.rsa_public_key_fingerprint = input;
self
}
/// <p>An InstanceIdentity object that contains the instance's identity.</p>
pub fn instance_identity(mut self, input: crate::model::InstanceIdentity) -> Self {
self.instance_identity = Some(input);
self
}
/// <p>An InstanceIdentity object that contains the instance's identity.</p>
pub fn set_instance_identity(
mut self,
input: std::option::Option<crate::model::InstanceIdentity>,
) -> Self {
self.instance_identity = input;
self
}
/// Consumes the builder and constructs a [`RegisterInstanceInput`](crate::input::RegisterInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::RegisterInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RegisterInstanceInput {
stack_id: self.stack_id,
hostname: self.hostname,
public_ip: self.public_ip,
private_ip: self.private_ip,
rsa_public_key: self.rsa_public_key,
rsa_public_key_fingerprint: self.rsa_public_key_fingerprint,
instance_identity: self.instance_identity,
})
}
}
}
#[doc(hidden)]
pub type RegisterInstanceInputOperationOutputAlias = crate::operation::RegisterInstance;
#[doc(hidden)]
pub type RegisterInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RegisterInstanceInput {
/// Consumes the builder and constructs an Operation<[`RegisterInstance`](crate::operation::RegisterInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RegisterInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::RegisterInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RegisterInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RegisterInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RegisterInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_register_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RegisterInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RegisterInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RegisterInstanceInput`](crate::input::RegisterInstanceInput)
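    ///
    /// A minimal builder sketch (assuming the published crate name `aws_sdk_opsworks`;
    /// the stack ID, hostname, and address are placeholders):
    ///
    /// ```no_run
    /// // Register an instance with the given stack.
    /// let input = aws_sdk_opsworks::input::RegisterInstanceInput::builder()
    ///     .stack_id("EXAMPLE-STACK-ID")
    ///     .hostname("example-host")
    ///     .private_ip("10.0.0.12")
    ///     .build()
    ///     .expect("building RegisterInstanceInput should not fail");
    /// # let _ = input;
    /// ```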
pub fn builder() -> crate::input::register_instance_input::Builder {
crate::input::register_instance_input::Builder::default()
}
}
/// See [`RegisterRdsDbInstanceInput`](crate::input::RegisterRdsDbInstanceInput)
pub mod register_rds_db_instance_input {
/// A builder for [`RegisterRdsDbInstanceInput`](crate::input::RegisterRdsDbInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) rds_db_instance_arn: std::option::Option<std::string::String>,
pub(crate) db_user: std::option::Option<std::string::String>,
pub(crate) db_password: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.rds_db_instance_arn = Some(input.into());
self
}
/// <p>The Amazon RDS instance's ARN.</p>
pub fn set_rds_db_instance_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.rds_db_instance_arn = input;
self
}
/// <p>The database's master user name.</p>
pub fn db_user(mut self, input: impl Into<std::string::String>) -> Self {
self.db_user = Some(input.into());
self
}
/// <p>The database's master user name.</p>
pub fn set_db_user(mut self, input: std::option::Option<std::string::String>) -> Self {
self.db_user = input;
self
}
/// <p>The database password.</p>
pub fn db_password(mut self, input: impl Into<std::string::String>) -> Self {
self.db_password = Some(input.into());
self
}
/// <p>The database password.</p>
pub fn set_db_password(mut self, input: std::option::Option<std::string::String>) -> Self {
self.db_password = input;
self
}
/// Consumes the builder and constructs a [`RegisterRdsDbInstanceInput`](crate::input::RegisterRdsDbInstanceInput)
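        ///
        /// A minimal sketch of the full builder chain (assuming the published crate name
        /// `aws_sdk_opsworks`; the stack ID, ARN, and credentials are placeholders):
        ///
        /// ```no_run
        /// // Register an Amazon RDS DB instance, along with its credentials, with a stack.
        /// let input = aws_sdk_opsworks::input::RegisterRdsDbInstanceInput::builder()
        ///     .stack_id("EXAMPLE-STACK-ID")
        ///     .rds_db_instance_arn("arn:aws:rds:us-east-1:111122223333:db:example-db")
        ///     .db_user("example_user")
        ///     .db_password("example-password")
        ///     .build()
        ///     .expect("building RegisterRdsDbInstanceInput should not fail");
        /// # let _ = input;
        /// ```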
pub fn build(
self,
) -> std::result::Result<
crate::input::RegisterRdsDbInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RegisterRdsDbInstanceInput {
stack_id: self.stack_id,
rds_db_instance_arn: self.rds_db_instance_arn,
db_user: self.db_user,
db_password: self.db_password,
})
}
}
}
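// Usage sketch (editor's addition, not generated code): constructing this input with the
// builder defined above. The string values are placeholders, and the `?` assumes a caller
// that can propagate `aws_smithy_http::operation::BuildError`.
//
// let input = crate::input::RegisterRdsDbInstanceInput::builder()
//     .stack_id("<stack-id>")
//     .rds_db_instance_arn("<rds-db-instance-arn>")
//     .db_user("<master-user-name>")
//     .db_password("<password>")
//     .build()?;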
#[doc(hidden)]
pub type RegisterRdsDbInstanceInputOperationOutputAlias = crate::operation::RegisterRdsDbInstance;
#[doc(hidden)]
pub type RegisterRdsDbInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RegisterRdsDbInstanceInput {
/// Consumes the builder and constructs an Operation<[`RegisterRdsDbInstance`](crate::operation::RegisterRdsDbInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RegisterRdsDbInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
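        // Request assembly (the same pattern is repeated for every operation in this file):
        // 1. Build the HTTP request: POST to "/", with a `content-type` of
        //    `application/x-amz-json-1.1` and an `x-amz-target` header naming the OpsWorks
        //    JSON-RPC operation.
        // 2. Serialize the input into the JSON body and set `Content-Length` when it is known.
        // 3. Populate the request property bag from the shared `Config`: user agent (plus any
        //    configured app name), signing config and signing service name, endpoint resolver,
        //    region, and credentials provider.
        // 4. Wrap the request in an `Operation` with its metadata and the AWS error retry policy.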
fn uri_base(
_input: &crate::input::RegisterRdsDbInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RegisterRdsDbInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RegisterRdsDbInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RegisterRdsDbInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_register_rds_db_instance(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RegisterRdsDbInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RegisterRdsDbInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RegisterRdsDbInstanceInput`](crate::input::RegisterRdsDbInstanceInput)
pub fn builder() -> crate::input::register_rds_db_instance_input::Builder {
crate::input::register_rds_db_instance_input::Builder::default()
}
}
/// See [`RegisterVolumeInput`](crate::input::RegisterVolumeInput)
pub mod register_volume_input {
/// A builder for [`RegisterVolumeInput`](crate::input::RegisterVolumeInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) ec2_volume_id: std::option::Option<std::string::String>,
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Amazon EBS volume ID.</p>
pub fn ec2_volume_id(mut self, input: impl Into<std::string::String>) -> Self {
self.ec2_volume_id = Some(input.into());
self
}
/// <p>The Amazon EBS volume ID.</p>
pub fn set_ec2_volume_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ec2_volume_id = input;
self
}
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`RegisterVolumeInput`](crate::input::RegisterVolumeInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::RegisterVolumeInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::RegisterVolumeInput {
ec2_volume_id: self.ec2_volume_id,
stack_id: self.stack_id,
})
}
}
}
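// Usage sketch (editor's addition, not generated code): registering an existing Amazon EBS
// volume with a stack via the builder defined above. Both values are placeholders.
//
// let input = crate::input::RegisterVolumeInput::builder()
//     .ec2_volume_id("<ec2-volume-id>")
//     .stack_id("<stack-id>")
//     .build()?;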
#[doc(hidden)]
pub type RegisterVolumeInputOperationOutputAlias = crate::operation::RegisterVolume;
#[doc(hidden)]
pub type RegisterVolumeInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl RegisterVolumeInput {
/// Consumes the builder and constructs an Operation<[`RegisterVolume`](crate::operation::RegisterVolume)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::RegisterVolume,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::RegisterVolumeInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::RegisterVolumeInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::RegisterVolumeInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.RegisterVolume",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_register_volume(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::RegisterVolume::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"RegisterVolume",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`RegisterVolumeInput`](crate::input::RegisterVolumeInput)
pub fn builder() -> crate::input::register_volume_input::Builder {
crate::input::register_volume_input::Builder::default()
}
}
/// See [`SetLoadBasedAutoScalingInput`](crate::input::SetLoadBasedAutoScalingInput)
pub mod set_load_based_auto_scaling_input {
/// A builder for [`SetLoadBasedAutoScalingInput`](crate::input::SetLoadBasedAutoScalingInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) layer_id: std::option::Option<std::string::String>,
pub(crate) enable: std::option::Option<bool>,
pub(crate) up_scaling: std::option::Option<crate::model::AutoScalingThresholds>,
pub(crate) down_scaling: std::option::Option<crate::model::AutoScalingThresholds>,
}
impl Builder {
/// <p>The layer ID.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The layer ID.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// <p>Enables load-based auto scaling for the layer.</p>
pub fn enable(mut self, input: bool) -> Self {
self.enable = Some(input);
self
}
/// <p>Enables load-based auto scaling for the layer.</p>
pub fn set_enable(mut self, input: std::option::Option<bool>) -> Self {
self.enable = input;
self
}
/// <p>An <code>AutoScalingThresholds</code> object with the upscaling threshold configuration. If
/// the load exceeds these thresholds for a specified amount of time, AWS OpsWorks Stacks starts a specified
/// number of instances.</p>
pub fn up_scaling(mut self, input: crate::model::AutoScalingThresholds) -> Self {
self.up_scaling = Some(input);
self
}
/// <p>An <code>AutoScalingThresholds</code> object with the upscaling threshold configuration. If
/// the load exceeds these thresholds for a specified amount of time, AWS OpsWorks Stacks starts a specified
/// number of instances.</p>
pub fn set_up_scaling(
mut self,
input: std::option::Option<crate::model::AutoScalingThresholds>,
) -> Self {
self.up_scaling = input;
self
}
/// <p>An <code>AutoScalingThresholds</code> object with the downscaling threshold configuration. If
/// the load falls below these thresholds for a specified amount of time, AWS OpsWorks Stacks stops a specified
/// number of instances.</p>
pub fn down_scaling(mut self, input: crate::model::AutoScalingThresholds) -> Self {
self.down_scaling = Some(input);
self
}
/// <p>An <code>AutoScalingThresholds</code> object with the downscaling threshold configuration. If
/// the load falls below these thresholds for a specified amount of time, AWS OpsWorks Stacks stops a specified
/// number of instances.</p>
pub fn set_down_scaling(
mut self,
input: std::option::Option<crate::model::AutoScalingThresholds>,
) -> Self {
self.down_scaling = input;
self
}
/// Consumes the builder and constructs a [`SetLoadBasedAutoScalingInput`](crate::input::SetLoadBasedAutoScalingInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::SetLoadBasedAutoScalingInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::SetLoadBasedAutoScalingInput {
layer_id: self.layer_id,
enable: self.enable,
up_scaling: self.up_scaling,
down_scaling: self.down_scaling,
})
}
}
}
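// Usage sketch (editor's addition, not generated code): enabling load-based auto scaling for
// a layer. `up` and `down` are assumed to be already-constructed
// `crate::model::AutoScalingThresholds` values (not shown here); the layer ID is a placeholder.
//
// let input = crate::input::SetLoadBasedAutoScalingInput::builder()
//     .layer_id("<layer-id>")
//     .enable(true)
//     .up_scaling(up)
//     .down_scaling(down)
//     .build()?;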
#[doc(hidden)]
pub type SetLoadBasedAutoScalingInputOperationOutputAlias =
crate::operation::SetLoadBasedAutoScaling;
#[doc(hidden)]
pub type SetLoadBasedAutoScalingInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl SetLoadBasedAutoScalingInput {
/// Consumes the builder and constructs an Operation<[`SetLoadBasedAutoScaling`](crate::operation::SetLoadBasedAutoScaling)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::SetLoadBasedAutoScaling,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::SetLoadBasedAutoScalingInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::SetLoadBasedAutoScalingInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::SetLoadBasedAutoScalingInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.SetLoadBasedAutoScaling",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_set_load_based_auto_scaling(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::SetLoadBasedAutoScaling::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"SetLoadBasedAutoScaling",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`SetLoadBasedAutoScalingInput`](crate::input::SetLoadBasedAutoScalingInput)
pub fn builder() -> crate::input::set_load_based_auto_scaling_input::Builder {
crate::input::set_load_based_auto_scaling_input::Builder::default()
}
}
/// See [`SetPermissionInput`](crate::input::SetPermissionInput)
pub mod set_permission_input {
/// A builder for [`SetPermissionInput`](crate::input::SetPermissionInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) iam_user_arn: std::option::Option<std::string::String>,
pub(crate) allow_ssh: std::option::Option<bool>,
pub(crate) allow_sudo: std::option::Option<bool>,
pub(crate) level: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.iam_user_arn = Some(input.into());
self
}
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn set_iam_user_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.iam_user_arn = input;
self
}
/// <p>The user is allowed to use SSH to communicate with the instance.</p>
pub fn allow_ssh(mut self, input: bool) -> Self {
self.allow_ssh = Some(input);
self
}
/// <p>The user is allowed to use SSH to communicate with the instance.</p>
pub fn set_allow_ssh(mut self, input: std::option::Option<bool>) -> Self {
self.allow_ssh = input;
self
}
/// <p>The user is allowed to use <b>sudo</b> to elevate privileges.</p>
pub fn allow_sudo(mut self, input: bool) -> Self {
self.allow_sudo = Some(input);
self
}
/// <p>The user is allowed to use <b>sudo</b> to elevate privileges.</p>
pub fn set_allow_sudo(mut self, input: std::option::Option<bool>) -> Self {
self.allow_sudo = input;
self
}
/// <p>The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.</p>
/// <ul>
/// <li>
/// <p>
/// <code>deny</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>show</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>deploy</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>manage</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>iam_only</code>
/// </p>
/// </li>
/// </ul>
/// <p>For more information about the permissions associated with these levels, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html">Managing User Permissions</a>.</p>
pub fn level(mut self, input: impl Into<std::string::String>) -> Self {
self.level = Some(input.into());
self
}
/// <p>The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.</p>
/// <ul>
/// <li>
/// <p>
/// <code>deny</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>show</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>deploy</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>manage</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>iam_only</code>
/// </p>
/// </li>
/// </ul>
/// <p>For more information about the permissions associated with these levels, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html">Managing User Permissions</a>.</p>
pub fn set_level(mut self, input: std::option::Option<std::string::String>) -> Self {
self.level = input;
self
}
/// Consumes the builder and constructs a [`SetPermissionInput`](crate::input::SetPermissionInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::SetPermissionInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::SetPermissionInput {
stack_id: self.stack_id,
iam_user_arn: self.iam_user_arn,
allow_ssh: self.allow_ssh,
allow_sudo: self.allow_sudo,
level: self.level,
})
}
}
}
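// Usage sketch (editor's addition, not generated code): granting a user the documented
// `manage` permission level on a stack. The stack ID and IAM ARN are placeholders.
//
// let input = crate::input::SetPermissionInput::builder()
//     .stack_id("<stack-id>")
//     .iam_user_arn("<iam-user-arn>")
//     .allow_ssh(true)
//     .allow_sudo(false)
//     .level("manage")
//     .build()?;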
#[doc(hidden)]
pub type SetPermissionInputOperationOutputAlias = crate::operation::SetPermission;
#[doc(hidden)]
pub type SetPermissionInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl SetPermissionInput {
/// Consumes the builder and constructs an Operation<[`SetPermission`](crate::operation::SetPermission)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::SetPermission,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::SetPermissionInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::SetPermissionInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::SetPermissionInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.SetPermission",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_set_permission(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::SetPermission::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"SetPermission",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`SetPermissionInput`](crate::input::SetPermissionInput)
pub fn builder() -> crate::input::set_permission_input::Builder {
crate::input::set_permission_input::Builder::default()
}
}
/// See [`SetTimeBasedAutoScalingInput`](crate::input::SetTimeBasedAutoScalingInput)
pub mod set_time_based_auto_scaling_input {
/// A builder for [`SetTimeBasedAutoScalingInput`](crate::input::SetTimeBasedAutoScalingInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) auto_scaling_schedule:
std::option::Option<crate::model::WeeklyAutoScalingSchedule>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
        /// <p>A <code>WeeklyAutoScalingSchedule</code> with the instance schedule.</p>
pub fn auto_scaling_schedule(
mut self,
input: crate::model::WeeklyAutoScalingSchedule,
) -> Self {
self.auto_scaling_schedule = Some(input);
self
}
        /// <p>A <code>WeeklyAutoScalingSchedule</code> with the instance schedule.</p>
pub fn set_auto_scaling_schedule(
mut self,
input: std::option::Option<crate::model::WeeklyAutoScalingSchedule>,
) -> Self {
self.auto_scaling_schedule = input;
self
}
/// Consumes the builder and constructs a [`SetTimeBasedAutoScalingInput`](crate::input::SetTimeBasedAutoScalingInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::SetTimeBasedAutoScalingInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::SetTimeBasedAutoScalingInput {
instance_id: self.instance_id,
auto_scaling_schedule: self.auto_scaling_schedule,
})
}
}
}
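// Usage sketch (editor's addition, not generated code): attaching a time-based auto scaling
// schedule to an instance. `schedule` is assumed to be a
// `crate::model::WeeklyAutoScalingSchedule` built elsewhere; the instance ID is a placeholder.
//
// let input = crate::input::SetTimeBasedAutoScalingInput::builder()
//     .instance_id("<instance-id>")
//     .auto_scaling_schedule(schedule)
//     .build()?;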
#[doc(hidden)]
pub type SetTimeBasedAutoScalingInputOperationOutputAlias =
crate::operation::SetTimeBasedAutoScaling;
#[doc(hidden)]
pub type SetTimeBasedAutoScalingInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl SetTimeBasedAutoScalingInput {
/// Consumes the builder and constructs an Operation<[`SetTimeBasedAutoScaling`](crate::operation::SetTimeBasedAutoScaling)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::SetTimeBasedAutoScaling,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::SetTimeBasedAutoScalingInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::SetTimeBasedAutoScalingInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::SetTimeBasedAutoScalingInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.SetTimeBasedAutoScaling",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_set_time_based_auto_scaling(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::SetTimeBasedAutoScaling::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"SetTimeBasedAutoScaling",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`SetTimeBasedAutoScalingInput`](crate::input::SetTimeBasedAutoScalingInput)
pub fn builder() -> crate::input::set_time_based_auto_scaling_input::Builder {
crate::input::set_time_based_auto_scaling_input::Builder::default()
}
}
/// See [`StartInstanceInput`](crate::input::StartInstanceInput)
pub mod start_instance_input {
/// A builder for [`StartInstanceInput`](crate::input::StartInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs a [`StartInstanceInput`](crate::input::StartInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::StartInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::StartInstanceInput {
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type StartInstanceInputOperationOutputAlias = crate::operation::StartInstance;
#[doc(hidden)]
pub type StartInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl StartInstanceInput {
/// Consumes the builder and constructs an Operation<[`StartInstance`](crate::operation::StartInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::StartInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::StartInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::StartInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::StartInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.StartInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_start_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::StartInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"StartInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`StartInstanceInput`](crate::input::StartInstanceInput)
pub fn builder() -> crate::input::start_instance_input::Builder {
crate::input::start_instance_input::Builder::default()
}
}
/// See [`StartStackInput`](crate::input::StartStackInput)
pub mod start_stack_input {
/// A builder for [`StartStackInput`](crate::input::StartStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`StartStackInput`](crate::input::StartStackInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::StartStackInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::StartStackInput {
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type StartStackInputOperationOutputAlias = crate::operation::StartStack;
#[doc(hidden)]
pub type StartStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl StartStackInput {
/// Consumes the builder and constructs an Operation<[`StartStack`](crate::operation::StartStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::StartStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::StartStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::StartStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::StartStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.StartStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_start_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::StartStack::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"StartStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`StartStackInput`](crate::input::StartStackInput)
pub fn builder() -> crate::input::start_stack_input::Builder {
crate::input::start_stack_input::Builder::default()
}
}
/// See [`StopInstanceInput`](crate::input::StopInstanceInput)
pub mod stop_instance_input {
/// A builder for [`StopInstanceInput`](crate::input::StopInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) force: std::option::Option<bool>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// <p>Specifies whether to force an instance to stop. If the instance's root device type is <code>ebs</code>, or EBS-backed,
        /// adding the <code>Force</code> parameter to the <code>StopInstance</code> API call disassociates the AWS OpsWorks Stacks instance from EC2, and forces deletion of <i>only</i> the OpsWorks Stacks instance.
/// You must also delete the formerly-associated instance in EC2 after troubleshooting and replacing the AWS OpsWorks Stacks instance with a new one.</p>
pub fn force(mut self, input: bool) -> Self {
self.force = Some(input);
self
}
/// <p>Specifies whether to force an instance to stop. If the instance's root device type is <code>ebs</code>, or EBS-backed,
        /// adding the <code>Force</code> parameter to the <code>StopInstance</code> API call disassociates the AWS OpsWorks Stacks instance from EC2, and forces deletion of <i>only</i> the OpsWorks Stacks instance.
/// You must also delete the formerly-associated instance in EC2 after troubleshooting and replacing the AWS OpsWorks Stacks instance with a new one.</p>
pub fn set_force(mut self, input: std::option::Option<bool>) -> Self {
self.force = input;
self
}
/// Consumes the builder and constructs a [`StopInstanceInput`](crate::input::StopInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::StopInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::StopInstanceInput {
instance_id: self.instance_id,
force: self.force,
})
}
}
}
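// Usage sketch (editor's addition, not generated code): force-stopping an EBS-backed instance,
// as described in the `force` documentation above. The instance ID is a placeholder.
//
// let input = crate::input::StopInstanceInput::builder()
//     .instance_id("<instance-id>")
//     .force(true)
//     .build()?;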
#[doc(hidden)]
pub type StopInstanceInputOperationOutputAlias = crate::operation::StopInstance;
#[doc(hidden)]
pub type StopInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl StopInstanceInput {
/// Consumes the builder and constructs an Operation<[`StopInstance`](crate::operation::StopInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::StopInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::StopInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::StopInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::StopInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.StopInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_stop_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::StopInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"StopInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`StopInstanceInput`](crate::input::StopInstanceInput)
pub fn builder() -> crate::input::stop_instance_input::Builder {
crate::input::stop_instance_input::Builder::default()
}
}
/// See [`StopStackInput`](crate::input::StopStackInput)
pub mod stop_stack_input {
/// A builder for [`StopStackInput`](crate::input::StopStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// Consumes the builder and constructs a [`StopStackInput`](crate::input::StopStackInput)
pub fn build(
self,
) -> std::result::Result<crate::input::StopStackInput, aws_smithy_http::operation::BuildError>
{
Ok(crate::input::StopStackInput {
stack_id: self.stack_id,
})
}
}
}
#[doc(hidden)]
pub type StopStackInputOperationOutputAlias = crate::operation::StopStack;
#[doc(hidden)]
pub type StopStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl StopStackInput {
/// Consumes the builder and constructs an Operation<[`StopStack`](crate::operation::StopStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::StopStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::StopStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::StopStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::StopStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.StopStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_stop_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op =
aws_smithy_http::operation::Operation::new(request, crate::operation::StopStack::new())
.with_metadata(aws_smithy_http::operation::Metadata::new(
"StopStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`StopStackInput`](crate::input::StopStackInput)
pub fn builder() -> crate::input::stop_stack_input::Builder {
crate::input::stop_stack_input::Builder::default()
}
}
/// See [`TagResourceInput`](crate::input::TagResourceInput)
pub mod tag_resource_input {
/// A builder for [`TagResourceInput`](crate::input::TagResourceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) resource_arn: std::option::Option<std::string::String>,
pub(crate) tags: std::option::Option<
std::collections::HashMap<std::string::String, std::string::String>,
>,
}
impl Builder {
        /// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn resource_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.resource_arn = Some(input.into());
self
}
        /// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn set_resource_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.resource_arn = input;
self
}
/// Adds a key-value pair to `tags`.
///
/// To override the contents of this collection use [`set_tags`](Self::set_tags).
///
/// <p>A map that contains tag keys and tag values that are attached to a stack or layer.</p>
/// <ul>
/// <li>
/// <p>The key cannot be empty.</p>
/// </li>
/// <li>
/// <p>The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
        /// <p>The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>Leading and trailing white spaces are trimmed from both the key and value.</p>
/// </li>
/// <li>
/// <p>A maximum of 40 tags is allowed for any resource.</p>
/// </li>
/// </ul>
pub fn tags(
mut self,
k: impl Into<std::string::String>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.tags.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.tags = Some(hash_map);
self
}
/// <p>A map that contains tag keys and tag values that are attached to a stack or layer.</p>
/// <ul>
/// <li>
/// <p>The key cannot be empty.</p>
/// </li>
/// <li>
/// <p>The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
        /// <p>The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>Leading and trailing white spaces are trimmed from both the key and value.</p>
/// </li>
/// <li>
/// <p>A maximum of 40 tags is allowed for any resource.</p>
/// </li>
/// </ul>
pub fn set_tags(
mut self,
input: std::option::Option<
std::collections::HashMap<std::string::String, std::string::String>,
>,
) -> Self {
self.tags = input;
self
}
/// Consumes the builder and constructs a [`TagResourceInput`](crate::input::TagResourceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::TagResourceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::TagResourceInput {
resource_arn: self.resource_arn,
tags: self.tags,
})
}
}
}
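// Usage sketch (editor's addition, not generated code): tagging a stack or layer. Each call to
// `tags(key, value)` appends one entry; `set_tags` replaces the whole map. The ARN and the
// tag keys/values are placeholders.
//
// let input = crate::input::TagResourceInput::builder()
//     .resource_arn("<stack-or-layer-arn>")
//     .tags("Team", "Platform")
//     .tags("Environment", "Production")
//     .build()?;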
#[doc(hidden)]
pub type TagResourceInputOperationOutputAlias = crate::operation::TagResource;
#[doc(hidden)]
pub type TagResourceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl TagResourceInput {
/// Consumes the builder and constructs an Operation<[`TagResource`](crate::operation::TagResource)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::TagResource,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::TagResourceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::TagResourceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::TagResourceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.TagResource",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_tag_resource(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::TagResource::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"TagResource",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`TagResourceInput`](crate::input::TagResourceInput)
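///
/// # Example
///
/// A minimal, illustrative sketch of assembling a `TagResourceInput` by hand.
/// The crate name `aws_sdk_opsworks`, the `resource_arn` setter, and the
/// ARN/tag values shown here are assumptions for illustration only; most
/// applications construct this input through the service client's fluent API.
///
/// ```no_run
/// use std::collections::HashMap;
///
/// // Tag keys and values must respect the limits documented on `set_tags`.
/// let mut tags = HashMap::new();
/// tags.insert("Environment".to_string(), "Production".to_string());
///
/// let input = aws_sdk_opsworks::input::TagResourceInput::builder()
///     .resource_arn("arn:aws:opsworks:us-east-1:111122223333:stack/EXAMPLE-STACK-ID/")
///     .set_tags(Some(tags))
///     .build()
///     .expect("valid TagResourceInput");
/// ```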
pub fn builder() -> crate::input::tag_resource_input::Builder {
crate::input::tag_resource_input::Builder::default()
}
}
/// See [`UnassignInstanceInput`](crate::input::UnassignInstanceInput)
pub mod unassign_instance_input {
/// A builder for [`UnassignInstanceInput`](crate::input::UnassignInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Consumes the builder and constructs an [`UnassignInstanceInput`](crate::input::UnassignInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UnassignInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UnassignInstanceInput {
instance_id: self.instance_id,
})
}
}
}
#[doc(hidden)]
pub type UnassignInstanceInputOperationOutputAlias = crate::operation::UnassignInstance;
#[doc(hidden)]
pub type UnassignInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UnassignInstanceInput {
/// Consumes the builder and constructs an Operation<[`UnassignInstance`](crate::operation::UnassignInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UnassignInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UnassignInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UnassignInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UnassignInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UnassignInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_unassign_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UnassignInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UnassignInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UnassignInstanceInput`](crate::input::UnassignInstanceInput)
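///
/// # Example
///
/// A minimal sketch of building an `UnassignInstanceInput` directly with this
/// builder. The crate name `aws_sdk_opsworks` and the instance ID are
/// illustrative assumptions; callers usually go through the service client.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UnassignInstanceInput::builder()
///     .instance_id("4d6d1710-ded9-42a1-b08e-EXAMPLE")
///     .build()
///     .expect("valid UnassignInstanceInput");
/// ```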
pub fn builder() -> crate::input::unassign_instance_input::Builder {
crate::input::unassign_instance_input::Builder::default()
}
}
/// See [`UnassignVolumeInput`](crate::input::UnassignVolumeInput)
pub mod unassign_volume_input {
/// A builder for [`UnassignVolumeInput`](crate::input::UnassignVolumeInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) volume_id: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The volume ID.</p>
pub fn volume_id(mut self, input: impl Into<std::string::String>) -> Self {
self.volume_id = Some(input.into());
self
}
/// <p>The volume ID.</p>
pub fn set_volume_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.volume_id = input;
self
}
/// Consumes the builder and constructs an [`UnassignVolumeInput`](crate::input::UnassignVolumeInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UnassignVolumeInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UnassignVolumeInput {
volume_id: self.volume_id,
})
}
}
}
#[doc(hidden)]
pub type UnassignVolumeInputOperationOutputAlias = crate::operation::UnassignVolume;
#[doc(hidden)]
pub type UnassignVolumeInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UnassignVolumeInput {
/// Consumes the builder and constructs an Operation<[`UnassignVolume`](crate::operation::UnassignVolume)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UnassignVolume,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UnassignVolumeInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UnassignVolumeInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UnassignVolumeInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UnassignVolume",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_unassign_volume(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UnassignVolume::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UnassignVolume",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UnassignVolumeInput`](crate::input::UnassignVolumeInput)
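///
/// # Example
///
/// A minimal sketch of building an `UnassignVolumeInput`. The crate name
/// `aws_sdk_opsworks` and the volume ID are illustrative assumptions; the
/// volume ID shown is a hypothetical placeholder.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UnassignVolumeInput::builder()
///     .volume_id("1f9a5a26-63c5-4c4a-9d6c-EXAMPLE")
///     .build()
///     .expect("valid UnassignVolumeInput");
/// ```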
pub fn builder() -> crate::input::unassign_volume_input::Builder {
crate::input::unassign_volume_input::Builder::default()
}
}
/// See [`UntagResourceInput`](crate::input::UntagResourceInput)
pub mod untag_resource_input {
/// A builder for [`UntagResourceInput`](crate::input::UntagResourceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) resource_arn: std::option::Option<std::string::String>,
pub(crate) tag_keys: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl Builder {
/// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn resource_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.resource_arn = Some(input.into());
self
}
/// <p>The stack or layer's Amazon Resource Name (ARN).</p>
pub fn set_resource_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.resource_arn = input;
self
}
/// Appends an item to `tag_keys`.
///
/// To override the contents of this collection use [`set_tag_keys`](Self::set_tag_keys).
///
/// <p>A list of the keys of tags to be removed from a stack or layer.</p>
pub fn tag_keys(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.tag_keys.unwrap_or_default();
v.push(input.into());
self.tag_keys = Some(v);
self
}
/// <p>A list of the keys of tags to be removed from a stack or layer.</p>
pub fn set_tag_keys(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.tag_keys = input;
self
}
/// Consumes the builder and constructs an [`UntagResourceInput`](crate::input::UntagResourceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UntagResourceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UntagResourceInput {
resource_arn: self.resource_arn,
tag_keys: self.tag_keys,
})
}
}
}
#[doc(hidden)]
pub type UntagResourceInputOperationOutputAlias = crate::operation::UntagResource;
#[doc(hidden)]
pub type UntagResourceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UntagResourceInput {
/// Consumes the builder and constructs an Operation<[`UntagResource`](crate::operation::UntagResource)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UntagResource,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UntagResourceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UntagResourceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UntagResourceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UntagResource",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_untag_resource(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UntagResource::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UntagResource",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UntagResourceInput`](crate::input::UntagResourceInput)
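///
/// # Example
///
/// A minimal sketch of building an `UntagResourceInput`; each call to
/// `tag_keys` appends one key to remove. The crate name `aws_sdk_opsworks`
/// and the ARN/keys are illustrative assumptions.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UntagResourceInput::builder()
///     .resource_arn("arn:aws:opsworks:us-east-1:111122223333:stack/EXAMPLE-STACK-ID/")
///     .tag_keys("Environment")
///     .tag_keys("Owner")
///     .build()
///     .expect("valid UntagResourceInput");
/// ```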
pub fn builder() -> crate::input::untag_resource_input::Builder {
crate::input::untag_resource_input::Builder::default()
}
}
/// See [`UpdateAppInput`](crate::input::UpdateAppInput)
pub mod update_app_input {
/// A builder for [`UpdateAppInput`](crate::input::UpdateAppInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) app_id: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) description: std::option::Option<std::string::String>,
pub(crate) data_sources: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
pub(crate) r#type: std::option::Option<crate::model::AppType>,
pub(crate) app_source: std::option::Option<crate::model::Source>,
pub(crate) domains: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) enable_ssl: std::option::Option<bool>,
pub(crate) ssl_configuration: std::option::Option<crate::model::SslConfiguration>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
pub(crate) environment:
std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
}
impl Builder {
/// <p>The app ID.</p>
pub fn app_id(mut self, input: impl Into<std::string::String>) -> Self {
self.app_id = Some(input.into());
self
}
/// <p>The app ID.</p>
pub fn set_app_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.app_id = input;
self
}
/// <p>The app name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The app name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>A description of the app.</p>
pub fn description(mut self, input: impl Into<std::string::String>) -> Self {
self.description = Some(input.into());
self
}
/// <p>A description of the app.</p>
pub fn set_description(mut self, input: std::option::Option<std::string::String>) -> Self {
self.description = input;
self
}
/// Appends an item to `data_sources`.
///
/// To override the contents of this collection use [`set_data_sources`](Self::set_data_sources).
///
/// <p>The app's data sources.</p>
pub fn data_sources(mut self, input: impl Into<crate::model::DataSource>) -> Self {
let mut v = self.data_sources.unwrap_or_default();
v.push(input.into());
self.data_sources = Some(v);
self
}
/// <p>The app's data sources.</p>
pub fn set_data_sources(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
) -> Self {
self.data_sources = input;
self
}
/// <p>The app type.</p>
pub fn r#type(mut self, input: crate::model::AppType) -> Self {
self.r#type = Some(input);
self
}
/// <p>The app type.</p>
pub fn set_type(mut self, input: std::option::Option<crate::model::AppType>) -> Self {
self.r#type = input;
self
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn app_source(mut self, input: crate::model::Source) -> Self {
self.app_source = Some(input);
self
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn set_app_source(mut self, input: std::option::Option<crate::model::Source>) -> Self {
self.app_source = input;
self
}
/// Appends an item to `domains`.
///
/// To override the contents of this collection use [`set_domains`](Self::set_domains).
///
/// <p>The app's virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub fn domains(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.domains.unwrap_or_default();
v.push(input.into());
self.domains = Some(v);
self
}
/// <p>The app's virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub fn set_domains(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.domains = input;
self
}
/// <p>Whether SSL is enabled for the app.</p>
pub fn enable_ssl(mut self, input: bool) -> Self {
self.enable_ssl = Some(input);
self
}
/// <p>Whether SSL is enabled for the app.</p>
pub fn set_enable_ssl(mut self, input: std::option::Option<bool>) -> Self {
self.enable_ssl = input;
self
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn ssl_configuration(mut self, input: crate::model::SslConfiguration) -> Self {
self.ssl_configuration = Some(input);
self
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn set_ssl_configuration(
mut self,
input: std::option::Option<crate::model::SslConfiguration>,
) -> Self {
self.ssl_configuration = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub fn attributes(
mut self,
k: impl Into<crate::model::AppAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// Appends an item to `environment`.
///
/// To override the contents of this collection use [`set_environment`](Self::set_environment).
///
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20 KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn environment(mut self, input: impl Into<crate::model::EnvironmentVariable>) -> Self {
let mut v = self.environment.unwrap_or_default();
v.push(input.into());
self.environment = Some(v);
self
}
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20 KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn set_environment(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
) -> Self {
self.environment = input;
self
}
/// Consumes the builder and constructs an [`UpdateAppInput`](crate::input::UpdateAppInput)
pub fn build(
self,
) -> std::result::Result<crate::input::UpdateAppInput, aws_smithy_http::operation::BuildError>
{
Ok(crate::input::UpdateAppInput {
app_id: self.app_id,
name: self.name,
description: self.description,
data_sources: self.data_sources,
r#type: self.r#type,
app_source: self.app_source,
domains: self.domains,
enable_ssl: self.enable_ssl,
ssl_configuration: self.ssl_configuration,
attributes: self.attributes,
environment: self.environment,
})
}
}
}
#[doc(hidden)]
pub type UpdateAppInputOperationOutputAlias = crate::operation::UpdateApp;
#[doc(hidden)]
pub type UpdateAppInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateAppInput {
/// Consumes the builder and constructs an Operation<[`UpdateApp`](crate::operation::UpdateApp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateApp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateAppInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateAppInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateAppInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateApp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_update_app(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op =
aws_smithy_http::operation::Operation::new(request, crate::operation::UpdateApp::new())
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateApp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateAppInput`](crate::input::UpdateAppInput)
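///
/// # Example
///
/// A minimal sketch of building an `UpdateAppInput` that renames an app and
/// replaces its domain list. The crate name `aws_sdk_opsworks` and all
/// IDs/values are illustrative assumptions.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateAppInput::builder()
///     .app_id("3f6bc6d0-1a2b-4c3d-8e9f-EXAMPLE")
///     .name("simple-php-app")
///     .description("Updated PHP demo app")
///     .domains("www.example.com")
///     .domains("example.com")
///     .enable_ssl(false)
///     .build()
///     .expect("valid UpdateAppInput");
/// ```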
pub fn builder() -> crate::input::update_app_input::Builder {
crate::input::update_app_input::Builder::default()
}
}
/// See [`UpdateElasticIpInput`](crate::input::UpdateElasticIpInput)
pub mod update_elastic_ip_input {
/// A builder for [`UpdateElasticIpInput`](crate::input::UpdateElasticIpInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) elastic_ip: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The IP address for which you want to update the name.</p>
pub fn elastic_ip(mut self, input: impl Into<std::string::String>) -> Self {
self.elastic_ip = Some(input.into());
self
}
/// <p>The IP address for which you want to update the name.</p>
pub fn set_elastic_ip(mut self, input: std::option::Option<std::string::String>) -> Self {
self.elastic_ip = input;
self
}
/// <p>The new name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The new name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// Consumes the builder and constructs an [`UpdateElasticIpInput`](crate::input::UpdateElasticIpInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateElasticIpInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateElasticIpInput {
elastic_ip: self.elastic_ip,
name: self.name,
})
}
}
}
#[doc(hidden)]
pub type UpdateElasticIpInputOperationOutputAlias = crate::operation::UpdateElasticIp;
#[doc(hidden)]
pub type UpdateElasticIpInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateElasticIpInput {
/// Consumes the builder and constructs an Operation<[`UpdateElasticIp`](crate::operation::UpdateElasticIp)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateElasticIp,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateElasticIpInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateElasticIpInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateElasticIpInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateElasticIp",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_update_elastic_ip(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateElasticIp::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateElasticIp",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateElasticIpInput`](crate::input::UpdateElasticIpInput)
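///
/// # Example
///
/// A minimal sketch of building an `UpdateElasticIpInput` that renames a
/// registered Elastic IP address. The crate name `aws_sdk_opsworks` and the
/// address/name values are illustrative assumptions.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateElasticIpInput::builder()
///     .elastic_ip("192.0.2.10")
///     .name("web-tier-ip")
///     .build()
///     .expect("valid UpdateElasticIpInput");
/// ```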
pub fn builder() -> crate::input::update_elastic_ip_input::Builder {
crate::input::update_elastic_ip_input::Builder::default()
}
}
/// See [`UpdateInstanceInput`](crate::input::UpdateInstanceInput)
pub mod update_instance_input {
/// A builder for [`UpdateInstanceInput`](crate::input::UpdateInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) instance_id: std::option::Option<std::string::String>,
pub(crate) layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) instance_type: std::option::Option<std::string::String>,
pub(crate) auto_scaling_type: std::option::Option<crate::model::AutoScalingType>,
pub(crate) hostname: std::option::Option<std::string::String>,
pub(crate) os: std::option::Option<std::string::String>,
pub(crate) ami_id: std::option::Option<std::string::String>,
pub(crate) ssh_key_name: std::option::Option<std::string::String>,
pub(crate) architecture: std::option::Option<crate::model::Architecture>,
pub(crate) install_updates_on_boot: std::option::Option<bool>,
pub(crate) ebs_optimized: std::option::Option<bool>,
pub(crate) agent_version: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The instance ID.</p>
pub fn instance_id(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_id = Some(input.into());
self
}
/// <p>The instance ID.</p>
pub fn set_instance_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.instance_id = input;
self
}
/// Appends an item to `layer_ids`.
///
/// To override the contents of this collection use [`set_layer_ids`](Self::set_layer_ids).
///
/// <p>The instance's layer IDs.</p>
pub fn layer_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.layer_ids.unwrap_or_default();
v.push(input.into());
self.layer_ids = Some(v);
self
}
/// <p>The instance's layer IDs.</p>
pub fn set_layer_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.layer_ids = input;
self
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn instance_type(mut self, input: impl Into<std::string::String>) -> Self {
self.instance_type = Some(input.into());
self
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn set_instance_type(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.instance_type = input;
self
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn auto_scaling_type(mut self, input: crate::model::AutoScalingType) -> Self {
self.auto_scaling_type = Some(input);
self
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn set_auto_scaling_type(
mut self,
input: std::option::Option<crate::model::AutoScalingType>,
) -> Self {
self.auto_scaling_type = input;
self
}
/// <p>The instance host name.</p>
pub fn hostname(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname = Some(input.into());
self
}
/// <p>The instance host name.</p>
pub fn set_hostname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.hostname = input;
self
}
/// <p>The instance's operating system, which must be set to one of the following. You cannot update an instance that is using a custom AMI.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux
/// 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the AmiId parameter to
/// specify the custom AMI that you want to use. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the updated stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn os(mut self, input: impl Into<std::string::String>) -> Self {
self.os = Some(input.into());
self
}
/// <p>The instance's operating system, which must be set to one of the following. You cannot update an instance that is using a custom AMI.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux
/// 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the AmiId parameter to
/// specify the custom AMI that you want to use. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the updated stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn set_os(mut self, input: std::option::Option<std::string::String>) -> Self {
self.os = input;
self
}
/// <p>The ID of the AMI that was used to create the instance. The value of this parameter must be the same AMI ID that the instance is already using.
/// You cannot apply a new AMI to an instance by running UpdateInstance. UpdateInstance does not work on instances that are using custom AMIs.
/// </p>
pub fn ami_id(mut self, input: impl Into<std::string::String>) -> Self {
self.ami_id = Some(input.into());
self
}
/// <p>The ID of the AMI that was used to create the instance. The value of this parameter must be the same AMI ID that the instance is already using.
/// You cannot apply a new AMI to an instance by running UpdateInstance. UpdateInstance does not work on instances that are using custom AMIs.
/// </p>
pub fn set_ami_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ami_id = input;
self
}
/// <p>The instance's Amazon EC2 key name.</p>
pub fn ssh_key_name(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_key_name = Some(input.into());
self
}
/// <p>The instance's Amazon EC2 key name.</p>
pub fn set_ssh_key_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ssh_key_name = input;
self
}
/// <p>The instance architecture. Instance types do not necessarily support both architectures. For
/// a list of the architectures that are supported by the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>.</p>
pub fn architecture(mut self, input: crate::model::Architecture) -> Self {
self.architecture = Some(input);
self
}
/// <p>The instance architecture. Instance types do not necessarily support both architectures. For
/// a list of the architectures that are supported by the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>.</p>
pub fn set_architecture(
mut self,
input: std::option::Option<crate::model::Architecture>,
) -> Self {
self.architecture = input;
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(mut self, input: bool) -> Self {
self.install_updates_on_boot = Some(input);
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn set_install_updates_on_boot(mut self, input: std::option::Option<bool>) -> Self {
self.install_updates_on_boot = input;
self
}
/// <p>This property cannot be updated.</p>
pub fn ebs_optimized(mut self, input: bool) -> Self {
self.ebs_optimized = Some(input);
self
}
/// <p>This property cannot be updated.</p>
pub fn set_ebs_optimized(mut self, input: std::option::Option<bool>) -> Self {
self.ebs_optimized = input;
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, you must edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.</p>
/// <p>AgentVersion cannot be set to Chef 12.2.</p>
pub fn agent_version(mut self, input: impl Into<std::string::String>) -> Self {
self.agent_version = Some(input.into());
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, you must edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.</p>
/// <p>AgentVersion cannot be set to Chef 12.2.</p>
pub fn set_agent_version(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.agent_version = input;
self
}
/// Consumes the builder and constructs an [`UpdateInstanceInput`](crate::input::UpdateInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateInstanceInput {
instance_id: self.instance_id,
layer_ids: self.layer_ids,
instance_type: self.instance_type,
auto_scaling_type: self.auto_scaling_type,
hostname: self.hostname,
os: self.os,
ami_id: self.ami_id,
ssh_key_name: self.ssh_key_name,
architecture: self.architecture,
install_updates_on_boot: self.install_updates_on_boot,
ebs_optimized: self.ebs_optimized,
agent_version: self.agent_version,
})
}
}
}
#[doc(hidden)]
pub type UpdateInstanceInputOperationOutputAlias = crate::operation::UpdateInstance;
#[doc(hidden)]
pub type UpdateInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateInstanceInput {
/// Consumes the builder and constructs an Operation<[`UpdateInstance`](crate::operation::UpdateInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_update_instance(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateInstanceInput`](crate::input::UpdateInstanceInput)
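///
/// # Example
///
/// A minimal sketch of building an `UpdateInstanceInput` that changes an
/// instance's hostname and layer membership and keeps automatic updates on
/// boot enabled. The crate name `aws_sdk_opsworks` and the IDs/values are
/// illustrative assumptions.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateInstanceInput::builder()
///     .instance_id("4d6d1710-ded9-42a1-b08e-EXAMPLE")
///     .hostname("php-app-1")
///     .layer_ids("5c8c272a-EXAMPLE-LAYER-ID")
///     .install_updates_on_boot(true)
///     .build()
///     .expect("valid UpdateInstanceInput");
/// ```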
pub fn builder() -> crate::input::update_instance_input::Builder {
crate::input::update_instance_input::Builder::default()
}
}
/// See [`UpdateLayerInput`](crate::input::UpdateLayerInput)
pub mod update_layer_input {
/// A builder for [`UpdateLayerInput`](crate::input::UpdateLayerInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) layer_id: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) shortname: std::option::Option<std::string::String>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
pub(crate) cloud_watch_logs_configuration:
std::option::Option<crate::model::CloudWatchLogsConfiguration>,
pub(crate) custom_instance_profile_arn: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
pub(crate) custom_security_group_ids:
std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) packages: std::option::Option<std::vec::Vec<std::string::String>>,
pub(crate) volume_configurations:
std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
pub(crate) enable_auto_healing: std::option::Option<bool>,
pub(crate) auto_assign_elastic_ips: std::option::Option<bool>,
pub(crate) auto_assign_public_ips: std::option::Option<bool>,
pub(crate) custom_recipes: std::option::Option<crate::model::Recipes>,
pub(crate) install_updates_on_boot: std::option::Option<bool>,
pub(crate) use_ebs_optimized_instances: std::option::Option<bool>,
pub(crate) lifecycle_event_configuration:
std::option::Option<crate::model::LifecycleEventConfiguration>,
}
impl Builder {
/// <p>The layer ID.</p>
pub fn layer_id(mut self, input: impl Into<std::string::String>) -> Self {
self.layer_id = Some(input.into());
self
}
/// <p>The layer ID.</p>
pub fn set_layer_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.layer_id = input;
self
}
/// <p>The layer name, which is used by the console.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The layer name, which is used by the console.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>
/// </p>
pub fn shortname(mut self, input: impl Into<std::string::String>) -> Self {
self.shortname = Some(input.into());
self
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>
/// </p>
pub fn set_shortname(mut self, input: std::option::Option<std::string::String>) -> Self {
self.shortname = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>One or more user-defined key/value pairs to be added to the layer attributes.</p>
pub fn attributes(
mut self,
k: impl Into<crate::model::LayerAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>One or more user-defined key/value pairs to be added to the layer attributes.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn cloud_watch_logs_configuration(
mut self,
input: crate::model::CloudWatchLogsConfiguration,
) -> Self {
self.cloud_watch_logs_configuration = Some(input);
self
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn set_cloud_watch_logs_configuration(
mut self,
input: std::option::Option<crate::model::CloudWatchLogsConfiguration>,
) -> Self {
self.cloud_watch_logs_configuration = input;
self
}
/// <p>The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more
/// information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn custom_instance_profile_arn(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.custom_instance_profile_arn = Some(input.into());
self
}
/// <p>The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more
/// information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_custom_instance_profile_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.custom_instance_profile_arn = input;
self
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>.
/// </p>
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>.
/// </p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
/// Appends an item to `custom_security_group_ids`.
///
/// To override the contents of this collection use [`set_custom_security_group_ids`](Self::set_custom_security_group_ids).
///
/// <p>An array containing the layer's custom security group IDs.</p>
pub fn custom_security_group_ids(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.custom_security_group_ids.unwrap_or_default();
v.push(input.into());
self.custom_security_group_ids = Some(v);
self
}
/// <p>An array containing the layer's custom security group IDs.</p>
pub fn set_custom_security_group_ids(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.custom_security_group_ids = input;
self
}
/// Appends an item to `packages`.
///
/// To override the contents of this collection use [`set_packages`](Self::set_packages).
///
/// <p>An array of <code>Package</code> objects that describe the layer's packages.</p>
pub fn packages(mut self, input: impl Into<std::string::String>) -> Self {
let mut v = self.packages.unwrap_or_default();
v.push(input.into());
self.packages = Some(v);
self
}
/// <p>An array of <code>Package</code> objects that describe the layer's packages.</p>
pub fn set_packages(
mut self,
input: std::option::Option<std::vec::Vec<std::string::String>>,
) -> Self {
self.packages = input;
self
}
/// Appends an item to `volume_configurations`.
///
/// To override the contents of this collection use [`set_volume_configurations`](Self::set_volume_configurations).
///
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn volume_configurations(
mut self,
input: impl Into<crate::model::VolumeConfiguration>,
) -> Self {
let mut v = self.volume_configurations.unwrap_or_default();
v.push(input.into());
self.volume_configurations = Some(v);
self
}
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn set_volume_configurations(
mut self,
input: std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
) -> Self {
self.volume_configurations = input;
self
}
/// <p>Whether to enable auto healing for the layer.</p>
pub fn enable_auto_healing(mut self, input: bool) -> Self {
self.enable_auto_healing = Some(input);
self
}
/// <p>Whether to enable auto healing for the layer.</p>
pub fn set_enable_auto_healing(mut self, input: std::option::Option<bool>) -> Self {
self.enable_auto_healing = input;
self
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_elastic_ips(mut self, input: bool) -> Self {
self.auto_assign_elastic_ips = Some(input);
self
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn set_auto_assign_elastic_ips(mut self, input: std::option::Option<bool>) -> Self {
self.auto_assign_elastic_ips = input;
self
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_public_ips(mut self, input: bool) -> Self {
self.auto_assign_public_ips = Some(input);
self
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn set_auto_assign_public_ips(mut self, input: std::option::Option<bool>) -> Self {
self.auto_assign_public_ips = input;
self
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer's custom recipes.</p>
pub fn custom_recipes(mut self, input: crate::model::Recipes) -> Self {
self.custom_recipes = Some(input);
self
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer's custom recipes.</p>
pub fn set_custom_recipes(
mut self,
input: std::option::Option<crate::model::Recipes>,
) -> Self {
self.custom_recipes = input;
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(mut self, input: bool) -> Self {
self.install_updates_on_boot = Some(input);
self
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn set_install_updates_on_boot(mut self, input: std::option::Option<bool>) -> Self {
self.install_updates_on_boot = input;
self
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn use_ebs_optimized_instances(mut self, input: bool) -> Self {
self.use_ebs_optimized_instances = Some(input);
self
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn set_use_ebs_optimized_instances(mut self, input: std::option::Option<bool>) -> Self {
self.use_ebs_optimized_instances = input;
self
}
/// <p>A <code>LifecycleEventConfiguration</code> object that you can use to configure the Shutdown event to specify an execution timeout and enable or disable Delayed Shutdown.</p>
pub fn lifecycle_event_configuration(
mut self,
input: crate::model::LifecycleEventConfiguration,
) -> Self {
self.lifecycle_event_configuration = Some(input);
self
}
/// <p>A <code>LifecycleEventConfiguration</code> object that you can use to configure the Shutdown event to specify an execution timeout and enable or disable Delayed Shutdown.</p>
pub fn set_lifecycle_event_configuration(
mut self,
input: std::option::Option<crate::model::LifecycleEventConfiguration>,
) -> Self {
self.lifecycle_event_configuration = input;
self
}
/// Consumes the builder and constructs an [`UpdateLayerInput`](crate::input::UpdateLayerInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateLayerInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateLayerInput {
layer_id: self.layer_id,
name: self.name,
shortname: self.shortname,
attributes: self.attributes,
cloud_watch_logs_configuration: self.cloud_watch_logs_configuration,
custom_instance_profile_arn: self.custom_instance_profile_arn,
custom_json: self.custom_json,
custom_security_group_ids: self.custom_security_group_ids,
packages: self.packages,
volume_configurations: self.volume_configurations,
enable_auto_healing: self.enable_auto_healing,
auto_assign_elastic_ips: self.auto_assign_elastic_ips,
auto_assign_public_ips: self.auto_assign_public_ips,
custom_recipes: self.custom_recipes,
install_updates_on_boot: self.install_updates_on_boot,
use_ebs_optimized_instances: self.use_ebs_optimized_instances,
lifecycle_event_configuration: self.lifecycle_event_configuration,
})
}
}
}
#[doc(hidden)]
pub type UpdateLayerInputOperationOutputAlias = crate::operation::UpdateLayer;
#[doc(hidden)]
pub type UpdateLayerInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateLayerInput {
/// Consumes the builder and constructs an Operation<[`UpdateLayer`](crate::operation::UpdateLayer)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateLayer,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
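// The OpsWorks API uses the AWS JSON 1.1 protocol: every operation is POSTed to "/",
// and the target operation is selected by the `x-amz-target` header set below.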
fn uri_base(
_input: &crate::input::UpdateLayerInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateLayerInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateLayerInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateLayer",
);
Ok(builder)
}
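// Serialize this input into the JSON request body and assemble the full HTTP request
// from the builder produced by `request_builder_base`.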
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_update_layer(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
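// Attach the metadata the middleware needs to sign and dispatch the request:
// user agent, SigV4 signing configuration and service name, endpoint resolver,
// region, and the credentials provider.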
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
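// Wrap the request in an `Operation` tagged with its metadata and the standard
// AWS error retry policy.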
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateLayer::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateLayer",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateLayerInput`](crate::input::UpdateLayerInput)
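///
/// # Example
///
/// A minimal sketch that fills in a few of the fields documented on the builder above.
/// The layer ID and package names are placeholders, and the crate is assumed to be
/// consumed under its published name `aws_sdk_opsworks`.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateLayerInput::builder()
///     .layer_id("11111111-2222-3333-4444-555555555555") // placeholder layer ID
///     .name("MyCustomLayer")
///     .enable_auto_healing(true)
///     // `packages` appends one item per call; use `set_packages` to replace the whole list.
///     .packages("git")
///     .packages("htop")
///     .build()
///     .expect("valid UpdateLayerInput");
/// ```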
pub fn builder() -> crate::input::update_layer_input::Builder {
crate::input::update_layer_input::Builder::default()
}
}
/// See [`UpdateMyUserProfileInput`](crate::input::UpdateMyUserProfileInput)
pub mod update_my_user_profile_input {
/// A builder for [`UpdateMyUserProfileInput`](crate::input::UpdateMyUserProfileInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) ssh_public_key: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The user's SSH public key.</p>
pub fn ssh_public_key(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_public_key = Some(input.into());
self
}
/// <p>The user's SSH public key.</p>
pub fn set_ssh_public_key(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ssh_public_key = input;
self
}
/// Consumes the builder and constructs an [`UpdateMyUserProfileInput`](crate::input::UpdateMyUserProfileInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateMyUserProfileInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateMyUserProfileInput {
ssh_public_key: self.ssh_public_key,
})
}
}
}
#[doc(hidden)]
pub type UpdateMyUserProfileInputOperationOutputAlias = crate::operation::UpdateMyUserProfile;
#[doc(hidden)]
pub type UpdateMyUserProfileInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateMyUserProfileInput {
/// Consumes the builder and constructs an Operation<[`UpdateMyUserProfile`](crate::operation::UpdateMyUserProfile)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateMyUserProfile,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateMyUserProfileInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateMyUserProfileInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateMyUserProfileInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateMyUserProfile",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_update_my_user_profile(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateMyUserProfile::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateMyUserProfile",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateMyUserProfileInput`](crate::input::UpdateMyUserProfileInput)
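///
/// # Example
///
/// A minimal sketch; the key material is a placeholder, and the crate is assumed to be
/// consumed under its published name `aws_sdk_opsworks`.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateMyUserProfileInput::builder()
///     .ssh_public_key("ssh-rsa AAAA... user@example.com") // placeholder public key
///     .build()
///     .expect("valid UpdateMyUserProfileInput");
/// ```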
pub fn builder() -> crate::input::update_my_user_profile_input::Builder {
crate::input::update_my_user_profile_input::Builder::default()
}
}
/// See [`UpdateRdsDbInstanceInput`](crate::input::UpdateRdsDbInstanceInput)
pub mod update_rds_db_instance_input {
/// A builder for [`UpdateRdsDbInstanceInput`](crate::input::UpdateRdsDbInstanceInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) rds_db_instance_arn: std::option::Option<std::string::String>,
pub(crate) db_user: std::option::Option<std::string::String>,
pub(crate) db_password: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.rds_db_instance_arn = Some(input.into());
self
}
/// <p>The Amazon RDS instance's ARN.</p>
pub fn set_rds_db_instance_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.rds_db_instance_arn = input;
self
}
/// <p>The master user name.</p>
pub fn db_user(mut self, input: impl Into<std::string::String>) -> Self {
self.db_user = Some(input.into());
self
}
/// <p>The master user name.</p>
pub fn set_db_user(mut self, input: std::option::Option<std::string::String>) -> Self {
self.db_user = input;
self
}
/// <p>The database password.</p>
pub fn db_password(mut self, input: impl Into<std::string::String>) -> Self {
self.db_password = Some(input.into());
self
}
/// <p>The database password.</p>
pub fn set_db_password(mut self, input: std::option::Option<std::string::String>) -> Self {
self.db_password = input;
self
}
/// Consumes the builder and constructs an [`UpdateRdsDbInstanceInput`](crate::input::UpdateRdsDbInstanceInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateRdsDbInstanceInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateRdsDbInstanceInput {
rds_db_instance_arn: self.rds_db_instance_arn,
db_user: self.db_user,
db_password: self.db_password,
})
}
}
}
#[doc(hidden)]
pub type UpdateRdsDbInstanceInputOperationOutputAlias = crate::operation::UpdateRdsDbInstance;
#[doc(hidden)]
pub type UpdateRdsDbInstanceInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateRdsDbInstanceInput {
/// Consumes the builder and constructs an Operation<[`UpdateRdsDbInstance`](crate::operation::UpdateRdsDbInstance)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateRdsDbInstance,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateRdsDbInstanceInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateRdsDbInstanceInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateRdsDbInstanceInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateRdsDbInstance",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_update_rds_db_instance(
&self,
)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateRdsDbInstance::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateRdsDbInstance",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateRdsDbInstanceInput`](crate::input::UpdateRdsDbInstanceInput)
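///
/// # Example
///
/// A minimal sketch; the ARN and credentials are placeholders, and the crate is assumed
/// to be consumed under its published name `aws_sdk_opsworks`.
///
/// ```no_run
/// let input = aws_sdk_opsworks::input::UpdateRdsDbInstanceInput::builder()
///     .rds_db_instance_arn("arn:aws:rds:us-east-1:111122223333:db:example") // placeholder ARN
///     .db_user("opsworks_user")
///     .db_password("placeholder-password")
///     .build()
///     .expect("valid UpdateRdsDbInstanceInput");
/// ```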
pub fn builder() -> crate::input::update_rds_db_instance_input::Builder {
crate::input::update_rds_db_instance_input::Builder::default()
}
}
/// See [`UpdateStackInput`](crate::input::UpdateStackInput)
pub mod update_stack_input {
/// A builder for [`UpdateStackInput`](crate::input::UpdateStackInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) stack_id: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
pub(crate) service_role_arn: std::option::Option<std::string::String>,
pub(crate) default_instance_profile_arn: std::option::Option<std::string::String>,
pub(crate) default_os: std::option::Option<std::string::String>,
pub(crate) hostname_theme: std::option::Option<std::string::String>,
pub(crate) default_availability_zone: std::option::Option<std::string::String>,
pub(crate) default_subnet_id: std::option::Option<std::string::String>,
pub(crate) custom_json: std::option::Option<std::string::String>,
pub(crate) configuration_manager:
std::option::Option<crate::model::StackConfigurationManager>,
pub(crate) chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
pub(crate) use_custom_cookbooks: std::option::Option<bool>,
pub(crate) custom_cookbooks_source: std::option::Option<crate::model::Source>,
pub(crate) default_ssh_key_name: std::option::Option<std::string::String>,
pub(crate) default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
pub(crate) use_opsworks_security_groups: std::option::Option<bool>,
pub(crate) agent_version: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The stack ID.</p>
pub fn stack_id(mut self, input: impl Into<std::string::String>) -> Self {
self.stack_id = Some(input.into());
self
}
/// <p>The stack ID.</p>
pub fn set_stack_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.stack_id = input;
self
}
/// <p>The stack's new name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The stack's new name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// Adds a key-value pair to `attributes`.
///
/// To override the contents of this collection use [`set_attributes`](Self::set_attributes).
///
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub fn attributes(
mut self,
k: impl Into<crate::model::StackAttributesKeys>,
v: impl Into<std::string::String>,
) -> Self {
let mut hash_map = self.attributes.unwrap_or_default();
hash_map.insert(k.into(), v.into());
self.attributes = Some(hash_map);
self
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub fn set_attributes(
mut self,
input: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
) -> Self {
self.attributes = input;
self
}
/// <p>Do not use this parameter. You cannot update a stack's service role.</p>
pub fn service_role_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.service_role_arn = Some(input.into());
self
}
/// <p>Do not use this parameter. You cannot update a stack's service role.</p>
pub fn set_service_role_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.service_role_arn = input;
self
}
/// <p>The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(
mut self,
input: impl Into<std::string::String>,
) -> Self {
self.default_instance_profile_arn = Some(input.into());
self
}
/// <p>The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn set_default_instance_profile_arn(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_instance_profile_arn = input;
self
}
/// <p>The stack's operating system, which must be set to one of the following:</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the stack's current operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn default_os(mut self, input: impl Into<std::string::String>) -> Self {
self.default_os = Some(input.into());
self
}
/// <p>The stack's operating system, which must be set to one of the following:</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the stack's current operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn set_default_os(mut self, input: std::option::Option<std::string::String>) -> Self {
self.default_os = input;
self
}
/// <p>The stack's new host name theme, with spaces replaced by underscores.
/// The theme is used to generate host names for the stack's instances.
/// By default, <code>HostnameTheme</code> is set to <code>Layer_Dependent</code>, which creates host names by appending integers to the
/// layer's short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(mut self, input: impl Into<std::string::String>) -> Self {
self.hostname_theme = Some(input.into());
self
}
/// <p>The stack's new host name theme, with spaces replaced by underscores.
/// The theme is used to generate host names for the stack's instances.
/// By default, <code>HostnameTheme</code> is set to <code>Layer_Dependent</code>, which creates host names by appending integers to the
/// layer's short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn set_hostname_theme(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.hostname_theme = input;
self
}
/// <p>The stack's default Availability Zone, which must be in the
/// stack's region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see <a>CreateStack</a>. </p>
pub fn default_availability_zone(mut self, input: impl Into<std::string::String>) -> Self {
self.default_availability_zone = Some(input.into());
self
}
/// <p>The stack's default Availability Zone, which must be in the
/// stack's region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see <a>CreateStack</a>. </p>
pub fn set_default_availability_zone(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_availability_zone = input;
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(mut self, input: impl Into<std::string::String>) -> Self {
self.default_subnet_id = Some(input.into());
self
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn set_default_subnet_id(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_subnet_id = input;
self
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub fn custom_json(mut self, input: impl Into<std::string::String>) -> Self {
self.custom_json = Some(input.into());
self
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub fn set_custom_json(mut self, input: std::option::Option<std::string::String>) -> Self {
self.custom_json = input;
self
}
/// <p>The configuration manager. When you update a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
mut self,
input: crate::model::StackConfigurationManager,
) -> Self {
self.configuration_manager = Some(input);
self
}
/// <p>The configuration manager. When you update a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn set_configuration_manager(
mut self,
input: std::option::Option<crate::model::StackConfigurationManager>,
) -> Self {
self.configuration_manager = input;
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(mut self, input: crate::model::ChefConfiguration) -> Self {
self.chef_configuration = Some(input);
self
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn set_chef_configuration(
mut self,
input: std::option::Option<crate::model::ChefConfiguration>,
) -> Self {
self.chef_configuration = input;
self
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn use_custom_cookbooks(mut self, input: bool) -> Self {
self.use_custom_cookbooks = Some(input);
self
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn set_use_custom_cookbooks(mut self, input: std::option::Option<bool>) -> Self {
self.use_custom_cookbooks = input;
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(mut self, input: crate::model::Source) -> Self {
self.custom_cookbooks_source = Some(input);
self
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn set_custom_cookbooks_source(
mut self,
input: std::option::Option<crate::model::Source>,
) -> Self {
self.custom_cookbooks_source = input;
self
}
/// <p>A default Amazon EC2 key-pair name. The default value is
/// <code>none</code>. If you specify a key-pair name,
/// AWS OpsWorks Stacks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(mut self, input: impl Into<std::string::String>) -> Self {
self.default_ssh_key_name = Some(input.into());
self
}
/// <p>A default Amazon EC2 key-pair name. The default value is
/// <code>none</code>. If you specify a key-pair name,
/// AWS OpsWorks Stacks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn set_default_ssh_key_name(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.default_ssh_key_name = input;
self
}
/// <p>The default root device type. This value is used by default for all instances in the stack,
/// but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(mut self, input: crate::model::RootDeviceType) -> Self {
self.default_root_device_type = Some(input);
self
}
/// <p>The default root device type. This value is used by default for all instances in the stack,
/// but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn set_default_root_device_type(
mut self,
input: std::option::Option<crate::model::RootDeviceType>,
) -> Self {
self.default_root_device_type = input;
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. <code>UseOpsworksSecurityGroups</code> allows you to
/// provide your own custom security groups
/// instead of using the built-in groups. <code>UseOpsworksSecurityGroups</code> has
/// the following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation. Custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(mut self, input: bool) -> Self {
self.use_opsworks_security_groups = Some(input);
self
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. <code>UseOpsworksSecurityGroups</code> allows you to
/// provide your own custom security groups
/// instead of using the built-in groups. <code>UseOpsworksSecurityGroups</code> has
/// the following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation. Custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn set_use_opsworks_security_groups(
mut self,
input: std::option::Option<bool>,
) -> Self {
self.use_opsworks_security_groups = input;
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.
/// AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn agent_version(mut self, input: impl Into<std::string::String>) -> Self {
self.agent_version = Some(input.into());
self
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.
/// AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn set_agent_version(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.agent_version = input;
self
}
/// Consumes the builder and constructs an [`UpdateStackInput`](crate::input::UpdateStackInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateStackInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateStackInput {
stack_id: self.stack_id,
name: self.name,
attributes: self.attributes,
service_role_arn: self.service_role_arn,
default_instance_profile_arn: self.default_instance_profile_arn,
default_os: self.default_os,
hostname_theme: self.hostname_theme,
default_availability_zone: self.default_availability_zone,
default_subnet_id: self.default_subnet_id,
custom_json: self.custom_json,
configuration_manager: self.configuration_manager,
chef_configuration: self.chef_configuration,
use_custom_cookbooks: self.use_custom_cookbooks,
custom_cookbooks_source: self.custom_cookbooks_source,
default_ssh_key_name: self.default_ssh_key_name,
default_root_device_type: self.default_root_device_type,
use_opsworks_security_groups: self.use_opsworks_security_groups,
agent_version: self.agent_version,
})
}
}
}
#[doc(hidden)]
pub type UpdateStackInputOperationOutputAlias = crate::operation::UpdateStack;
#[doc(hidden)]
pub type UpdateStackInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateStackInput {
/// Consumes the builder and constructs an Operation<[`UpdateStack`](crate::operation::UpdateStack)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateStack,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateStackInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateStackInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateStackInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateStack",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_update_stack(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateStack::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateStack",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateStackInput`](crate::input::UpdateStackInput)
pub fn builder() -> crate::input::update_stack_input::Builder {
crate::input::update_stack_input::Builder::default()
}
}
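#[cfg(test)]
mod update_stack_input_usage {
    // Illustrative usage sketch for the generated `UpdateStackInput` builder.
    // The setter names used below (`agent_version`, `use_opsworks_security_groups`)
    // are assumed to follow the same generated pattern as the other builders in
    // this module, matching the fields moved in `build()` above; the values are
    // hypothetical placeholders. In normal use the fluent client then calls
    // `make_operation` with a `Config` rather than invoking it by hand.
    #[test]
    fn build_update_stack_input() {
        let input = crate::input::UpdateStackInput::builder()
            .agent_version("LATEST")
            .use_opsworks_security_groups(true)
            .build()
            .expect("all fields are optional, so build should succeed");
        assert_eq!(input.agent_version(), Some("LATEST"));
        assert_eq!(input.use_opsworks_security_groups(), Some(true));
    }
}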
/// See [`UpdateUserProfileInput`](crate::input::UpdateUserProfileInput)
pub mod update_user_profile_input {
/// A builder for [`UpdateUserProfileInput`](crate::input::UpdateUserProfileInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) iam_user_arn: std::option::Option<std::string::String>,
pub(crate) ssh_username: std::option::Option<std::string::String>,
pub(crate) ssh_public_key: std::option::Option<std::string::String>,
pub(crate) allow_self_management: std::option::Option<bool>,
}
impl Builder {
/// <p>The user IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(mut self, input: impl Into<std::string::String>) -> Self {
self.iam_user_arn = Some(input.into());
self
}
/// <p>The user IAM ARN. This can also be a federated user's ARN.</p>
pub fn set_iam_user_arn(mut self, input: std::option::Option<std::string::String>) -> Self {
self.iam_user_arn = input;
self
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn ssh_username(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_username = Some(input.into());
self
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn set_ssh_username(mut self, input: std::option::Option<std::string::String>) -> Self {
self.ssh_username = input;
self
}
/// <p>The user's new SSH public key.</p>
pub fn ssh_public_key(mut self, input: impl Into<std::string::String>) -> Self {
self.ssh_public_key = Some(input.into());
self
}
/// <p>The user's new SSH public key.</p>
pub fn set_ssh_public_key(
mut self,
input: std::option::Option<std::string::String>,
) -> Self {
self.ssh_public_key = input;
self
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Managing User
/// Permissions</a>.</p>
pub fn allow_self_management(mut self, input: bool) -> Self {
self.allow_self_management = Some(input);
self
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Managing User
/// Permissions</a>.</p>
pub fn set_allow_self_management(mut self, input: std::option::Option<bool>) -> Self {
self.allow_self_management = input;
self
}
/// Consumes the builder and constructs an [`UpdateUserProfileInput`](crate::input::UpdateUserProfileInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateUserProfileInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateUserProfileInput {
iam_user_arn: self.iam_user_arn,
ssh_username: self.ssh_username,
ssh_public_key: self.ssh_public_key,
allow_self_management: self.allow_self_management,
})
}
}
}
#[doc(hidden)]
pub type UpdateUserProfileInputOperationOutputAlias = crate::operation::UpdateUserProfile;
#[doc(hidden)]
pub type UpdateUserProfileInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateUserProfileInput {
/// Consumes the builder and constructs an Operation<[`UpdateUserProfile`](crate::operation::UpdateUserProfile)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateUserProfile,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateUserProfileInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateUserProfileInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateUserProfileInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateUserProfile",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body =
crate::operation_ser::serialize_operation_crate_operation_update_user_profile(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateUserProfile::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateUserProfile",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateUserProfileInput`](crate::input::UpdateUserProfileInput)
pub fn builder() -> crate::input::update_user_profile_input::Builder {
crate::input::update_user_profile_input::Builder::default()
}
}
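#[cfg(test)]
mod update_user_profile_input_usage {
    // Illustrative usage sketch for the generated `UpdateUserProfileInput`
    // builder above. All field values are hypothetical placeholders; only the
    // setters, `build()`, and the accessors defined in this module are used.
    #[test]
    fn build_update_user_profile_input() {
        let input = crate::input::UpdateUserProfileInput::builder()
            .iam_user_arn("arn:aws:iam::123456789012:user/example-user")
            .ssh_username("example_user")
            .allow_self_management(true)
            .build()
            .expect("all fields are optional, so build should succeed");
        assert_eq!(input.ssh_username(), Some("example_user"));
        assert_eq!(input.allow_self_management(), Some(true));
        // Unset fields remain `None` rather than taking a default value.
        assert_eq!(input.ssh_public_key(), None);
    }
}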
/// See [`UpdateVolumeInput`](crate::input::UpdateVolumeInput)
pub mod update_volume_input {
/// A builder for [`UpdateVolumeInput`](crate::input::UpdateVolumeInput)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) volume_id: std::option::Option<std::string::String>,
pub(crate) name: std::option::Option<std::string::String>,
pub(crate) mount_point: std::option::Option<std::string::String>,
}
impl Builder {
/// <p>The volume ID.</p>
pub fn volume_id(mut self, input: impl Into<std::string::String>) -> Self {
self.volume_id = Some(input.into());
self
}
/// <p>The volume ID.</p>
pub fn set_volume_id(mut self, input: std::option::Option<std::string::String>) -> Self {
self.volume_id = input;
self
}
/// <p>The new name.</p>
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
}
/// <p>The new name.</p>
pub fn set_name(mut self, input: std::option::Option<std::string::String>) -> Self {
self.name = input;
self
}
/// <p>The new mount point.</p>
pub fn mount_point(mut self, input: impl Into<std::string::String>) -> Self {
self.mount_point = Some(input.into());
self
}
/// <p>The new mount point.</p>
pub fn set_mount_point(mut self, input: std::option::Option<std::string::String>) -> Self {
self.mount_point = input;
self
}
/// Consumes the builder and constructs an [`UpdateVolumeInput`](crate::input::UpdateVolumeInput)
pub fn build(
self,
) -> std::result::Result<
crate::input::UpdateVolumeInput,
aws_smithy_http::operation::BuildError,
> {
Ok(crate::input::UpdateVolumeInput {
volume_id: self.volume_id,
name: self.name,
mount_point: self.mount_point,
})
}
}
}
#[doc(hidden)]
pub type UpdateVolumeInputOperationOutputAlias = crate::operation::UpdateVolume;
#[doc(hidden)]
pub type UpdateVolumeInputOperationRetryAlias = aws_http::AwsErrorRetryPolicy;
impl UpdateVolumeInput {
/// Consumes the builder and constructs an Operation<[`UpdateVolume`](crate::operation::UpdateVolume)>
#[allow(clippy::let_and_return)]
#[allow(clippy::needless_borrow)]
pub async fn make_operation(
&self,
_config: &crate::config::Config,
) -> std::result::Result<
aws_smithy_http::operation::Operation<
crate::operation::UpdateVolume,
aws_http::AwsErrorRetryPolicy,
>,
aws_smithy_http::operation::BuildError,
> {
fn uri_base(
_input: &crate::input::UpdateVolumeInput,
output: &mut String,
) -> Result<(), aws_smithy_http::operation::BuildError> {
write!(output, "/").expect("formatting should succeed");
Ok(())
}
#[allow(clippy::unnecessary_wraps)]
fn update_http_builder(
input: &crate::input::UpdateVolumeInput,
builder: http::request::Builder,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
let mut uri = String::new();
uri_base(input, &mut uri)?;
Ok(builder.method("POST").uri(uri))
}
#[allow(clippy::unnecessary_wraps)]
fn request_builder_base(
input: &crate::input::UpdateVolumeInput,
) -> std::result::Result<http::request::Builder, aws_smithy_http::operation::BuildError>
{
#[allow(unused_mut)]
let mut builder = update_http_builder(input, http::request::Builder::new())?;
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("content-type"),
"application/x-amz-json-1.1",
);
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::HeaderName::from_static("x-amz-target"),
"OpsWorks_20130218.UpdateVolume",
);
Ok(builder)
}
let properties = aws_smithy_http::property_bag::SharedPropertyBag::new();
let request = request_builder_base(&self)?;
let body = crate::operation_ser::serialize_operation_crate_operation_update_volume(&self)?;
let request = Self::assemble(request, body);
#[allow(unused_mut)]
let mut request = aws_smithy_http::operation::Request::from_parts(
request.map(aws_smithy_http::body::SdkBody::from),
properties,
);
let mut user_agent = aws_http::user_agent::AwsUserAgent::new_from_environment(
aws_types::os_shim_internal::Env::real(),
crate::API_METADATA.clone(),
);
if let Some(app_name) = _config.app_name() {
user_agent = user_agent.with_app_name(app_name.clone());
}
request.properties_mut().insert(user_agent);
#[allow(unused_mut)]
let mut signing_config = aws_sig_auth::signer::OperationSigningConfig::default_config();
request.properties_mut().insert(signing_config);
request
.properties_mut()
.insert(aws_types::SigningService::from_static(
_config.signing_service(),
));
aws_endpoint::set_endpoint_resolver(
&mut request.properties_mut(),
_config.endpoint_resolver.clone(),
);
if let Some(region) = &_config.region {
request.properties_mut().insert(region.clone());
}
aws_http::auth::set_provider(
&mut request.properties_mut(),
_config.credentials_provider.clone(),
);
let op = aws_smithy_http::operation::Operation::new(
request,
crate::operation::UpdateVolume::new(),
)
.with_metadata(aws_smithy_http::operation::Metadata::new(
"UpdateVolume",
"opsworks",
));
let op = op.with_retry_policy(aws_http::AwsErrorRetryPolicy::new());
Ok(op)
}
fn assemble(
builder: http::request::Builder,
body: aws_smithy_http::body::SdkBody,
) -> http::request::Request<aws_smithy_http::body::SdkBody> {
let mut builder = builder;
if let Some(content_length) = body.content_length() {
builder = aws_smithy_http::header::set_header_if_absent(
builder,
http::header::CONTENT_LENGTH,
content_length,
);
}
builder.body(body).expect("should be valid request")
}
/// Creates a new builder-style object to manufacture [`UpdateVolumeInput`](crate::input::UpdateVolumeInput)
pub fn builder() -> crate::input::update_volume_input::Builder {
crate::input::update_volume_input::Builder::default()
}
}
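#[cfg(test)]
mod update_volume_input_usage {
    // Illustrative usage sketch for the generated `UpdateVolumeInput` builder
    // above. The volume ID, name, and mount point are hypothetical
    // placeholders; only the setters, `build()`, and the accessors defined in
    // this module are used.
    #[test]
    fn build_update_volume_input() {
        let input = crate::input::UpdateVolumeInput::builder()
            .volume_id("vol-0123456789abcdef0")
            .name("data-volume")
            .mount_point("/mnt/data")
            .build()
            .expect("all fields are optional, so build should succeed");
        assert_eq!(input.volume_id(), Some("vol-0123456789abcdef0"));
        assert_eq!(input.mount_point(), Some("/mnt/data"));
    }
}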
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateVolumeInput {
/// <p>The volume ID.</p>
pub volume_id: std::option::Option<std::string::String>,
/// <p>The new name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>The new mount point.</p>
pub mount_point: std::option::Option<std::string::String>,
}
impl UpdateVolumeInput {
/// <p>The volume ID.</p>
pub fn volume_id(&self) -> std::option::Option<&str> {
self.volume_id.as_deref()
}
/// <p>The new name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>The new mount point.</p>
pub fn mount_point(&self) -> std::option::Option<&str> {
self.mount_point.as_deref()
}
}
impl std::fmt::Debug for UpdateVolumeInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateVolumeInput");
formatter.field("volume_id", &self.volume_id);
formatter.field("name", &self.name);
formatter.field("mount_point", &self.mount_point);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateUserProfileInput {
/// <p>The user IAM ARN. This can also be a federated user's ARN.</p>
pub iam_user_arn: std::option::Option<std::string::String>,
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub ssh_username: std::option::Option<std::string::String>,
/// <p>The user's new SSH public key.</p>
pub ssh_public_key: std::option::Option<std::string::String>,
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Managing User
/// Permissions</a>.</p>
pub allow_self_management: std::option::Option<bool>,
}
impl UpdateUserProfileInput {
/// <p>The user IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(&self) -> std::option::Option<&str> {
self.iam_user_arn.as_deref()
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn ssh_username(&self) -> std::option::Option<&str> {
self.ssh_username.as_deref()
}
/// <p>The user's new SSH public key.</p>
pub fn ssh_public_key(&self) -> std::option::Option<&str> {
self.ssh_public_key.as_deref()
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Managing User
/// Permissions</a>.</p>
pub fn allow_self_management(&self) -> std::option::Option<bool> {
self.allow_self_management
}
}
impl std::fmt::Debug for UpdateUserProfileInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateUserProfileInput");
formatter.field("iam_user_arn", &self.iam_user_arn);
formatter.field("ssh_username", &self.ssh_username);
formatter.field("ssh_public_key", &self.ssh_public_key);
formatter.field("allow_self_management", &self.allow_self_management);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateStackInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The stack's new name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
/// <p>Do not use this parameter. You cannot update a stack's service role.</p>
pub service_role_arn: std::option::Option<std::string::String>,
/// <p>The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub default_instance_profile_arn: std::option::Option<std::string::String>,
/// <p>The stack's operating system, which must be set to one of the following:</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the stack's current operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub default_os: std::option::Option<std::string::String>,
/// <p>The stack's new host name theme, with spaces replaced by underscores.
/// The theme is used to generate host names for the stack's instances.
/// By default, <code>HostnameTheme</code> is set to <code>Layer_Dependent</code>, which creates host names by appending integers to the
/// layer's short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub hostname_theme: std::option::Option<std::string::String>,
/// <p>The stack's default Availability Zone, which must be in the
/// stack's region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see <a>CreateStack</a>. </p>
pub default_availability_zone: std::option::Option<std::string::String>,
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub default_subnet_id: std::option::Option<std::string::String>,
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub custom_json: std::option::Option<std::string::String>,
/// <p>The configuration manager. When you update a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub configuration_manager: std::option::Option<crate::model::StackConfigurationManager>,
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
/// <p>Whether the stack uses custom cookbooks.</p>
pub use_custom_cookbooks: std::option::Option<bool>,
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub custom_cookbooks_source: std::option::Option<crate::model::Source>,
/// <p>A default Amazon EC2 key-pair name. The default value is
/// <code>none</code>. If you specify a key-pair name,
/// AWS OpsWorks Stacks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub default_ssh_key_name: std::option::Option<std::string::String>,
/// <p>The default root device type. This value is used by default for all instances in the stack,
/// but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. <code>UseOpsworksSecurityGroups</code> allows you to
/// provide your own custom security groups
/// instead of using the built-in groups. <code>UseOpsworksSecurityGroups</code> has
/// the following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation. Custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub use_opsworks_security_groups: std::option::Option<bool>,
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.
/// AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub agent_version: std::option::Option<std::string::String>,
}
impl UpdateStackInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The stack's new name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>Do not use this parameter. You cannot update a stack's service role.</p>
pub fn service_role_arn(&self) -> std::option::Option<&str> {
self.service_role_arn.as_deref()
}
/// <p>The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(&self) -> std::option::Option<&str> {
self.default_instance_profile_arn.as_deref()
}
/// <p>The stack's operating system, which must be set to one of the following:</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the stack's current operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn default_os(&self) -> std::option::Option<&str> {
self.default_os.as_deref()
}
/// <p>The stack's new host name theme, with spaces replaced by underscores.
/// The theme is used to generate host names for the stack's instances.
/// By default, <code>HostnameTheme</code> is set to <code>Layer_Dependent</code>, which creates host names by appending integers to the
/// layer's short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(&self) -> std::option::Option<&str> {
self.hostname_theme.as_deref()
}
/// <p>The stack's default Availability Zone, which must be in the
/// stack's region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see <a>CreateStack</a>. </p>
pub fn default_availability_zone(&self) -> std::option::Option<&str> {
self.default_availability_zone.as_deref()
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(&self) -> std::option::Option<&str> {
self.default_subnet_id.as_deref()
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
/// <p>The configuration manager. When you update a stack, we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
&self,
) -> std::option::Option<&crate::model::StackConfigurationManager> {
self.configuration_manager.as_ref()
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(&self) -> std::option::Option<&crate::model::ChefConfiguration> {
self.chef_configuration.as_ref()
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn use_custom_cookbooks(&self) -> std::option::Option<bool> {
self.use_custom_cookbooks
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(&self) -> std::option::Option<&crate::model::Source> {
self.custom_cookbooks_source.as_ref()
}
/// <p>A default Amazon EC2 key-pair name. The default value is
/// <code>none</code>. If you specify a key-pair name,
/// AWS OpsWorks Stacks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(&self) -> std::option::Option<&str> {
self.default_ssh_key_name.as_deref()
}
/// <p>The default root device type. This value is used by default for all instances in the stack,
/// but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(&self) -> std::option::Option<&crate::model::RootDeviceType> {
self.default_root_device_type.as_ref()
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. <code>UseOpsworksSecurityGroups</code> allows you to
/// provide your own custom security groups
/// instead of using the built-in groups. <code>UseOpsworksSecurityGroups</code> has
/// the following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation. Custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(&self) -> std::option::Option<bool> {
self.use_opsworks_security_groups
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.
/// AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn agent_version(&self) -> std::option::Option<&str> {
self.agent_version.as_deref()
}
}
impl std::fmt::Debug for UpdateStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateStackInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("name", &self.name);
formatter.field("attributes", &self.attributes);
formatter.field("service_role_arn", &self.service_role_arn);
formatter.field(
"default_instance_profile_arn",
&self.default_instance_profile_arn,
);
formatter.field("default_os", &self.default_os);
formatter.field("hostname_theme", &self.hostname_theme);
formatter.field("default_availability_zone", &self.default_availability_zone);
formatter.field("default_subnet_id", &self.default_subnet_id);
formatter.field("custom_json", &self.custom_json);
formatter.field("configuration_manager", &self.configuration_manager);
formatter.field("chef_configuration", &self.chef_configuration);
formatter.field("use_custom_cookbooks", &self.use_custom_cookbooks);
formatter.field("custom_cookbooks_source", &self.custom_cookbooks_source);
formatter.field("default_ssh_key_name", &self.default_ssh_key_name);
formatter.field("default_root_device_type", &self.default_root_device_type);
formatter.field(
"use_opsworks_security_groups",
&self.use_opsworks_security_groups,
);
formatter.field("agent_version", &self.agent_version);
formatter.finish()
}
}
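#[cfg(test)]
mod update_stack_custom_json_format {
    // Illustrative sketch of the `custom_json` format documented on
    // `UpdateStackInput` above: a JSON object serialized into a single string,
    // e.g. "{\"key1\": \"value1\", \"key2\": \"value2\"}". The `custom_json`
    // setter is assumed to follow the standard generated builder pattern; the
    // keys and values are hypothetical placeholders.
    #[test]
    fn custom_json_is_carried_verbatim() {
        let custom_json = r#"{"key1": "value1", "key2": "value2"}"#;
        let input = crate::input::UpdateStackInput::builder()
            .custom_json(custom_json)
            .build()
            .expect("all fields are optional, so build should succeed");
        // The input stores the string as provided; OpsWorks parses it
        // server-side when the stack configuration is applied.
        assert_eq!(input.custom_json(), Some(custom_json));
    }
}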
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateRdsDbInstanceInput {
/// <p>The Amazon RDS instance's ARN.</p>
pub rds_db_instance_arn: std::option::Option<std::string::String>,
/// <p>The master user name.</p>
pub db_user: std::option::Option<std::string::String>,
/// <p>The database password.</p>
pub db_password: std::option::Option<std::string::String>,
}
impl UpdateRdsDbInstanceInput {
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(&self) -> std::option::Option<&str> {
self.rds_db_instance_arn.as_deref()
}
/// <p>The master user name.</p>
pub fn db_user(&self) -> std::option::Option<&str> {
self.db_user.as_deref()
}
/// <p>The database password.</p>
pub fn db_password(&self) -> std::option::Option<&str> {
self.db_password.as_deref()
}
}
impl std::fmt::Debug for UpdateRdsDbInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateRdsDbInstanceInput");
formatter.field("rds_db_instance_arn", &self.rds_db_instance_arn);
formatter.field("db_user", &self.db_user);
formatter.field("db_password", &self.db_password);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateMyUserProfileInput {
/// <p>The user's SSH public key.</p>
pub ssh_public_key: std::option::Option<std::string::String>,
}
impl UpdateMyUserProfileInput {
/// <p>The user's SSH public key.</p>
pub fn ssh_public_key(&self) -> std::option::Option<&str> {
self.ssh_public_key.as_deref()
}
}
impl std::fmt::Debug for UpdateMyUserProfileInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateMyUserProfileInput");
formatter.field("ssh_public_key", &self.ssh_public_key);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateLayerInput {
/// <p>The layer ID.</p>
pub layer_id: std::option::Option<std::string::String>,
/// <p>The layer name, which is used by the console.</p>
pub name: std::option::Option<std::string::String>,
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>
/// </p>
pub shortname: std::option::Option<std::string::String>,
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub cloud_watch_logs_configuration:
std::option::Option<crate::model::CloudWatchLogsConfiguration>,
/// <p>The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more
/// information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub custom_instance_profile_arn: std::option::Option<std::string::String>,
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>.
/// </p>
pub custom_json: std::option::Option<std::string::String>,
/// <p>An array containing the layer's custom security group IDs.</p>
pub custom_security_group_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>An array of <code>Package</code> objects that describe the layer's packages.</p>
pub packages: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub volume_configurations:
std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
/// <p>Whether to disable auto healing for the layer.</p>
pub enable_auto_healing: std::option::Option<bool>,
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub auto_assign_elastic_ips: std::option::Option<bool>,
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub auto_assign_public_ips: std::option::Option<bool>,
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer's custom recipes.</p>
pub custom_recipes: std::option::Option<crate::model::Recipes>,
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub install_updates_on_boot: std::option::Option<bool>,
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub use_ebs_optimized_instances: std::option::Option<bool>,
/// <p></p>
pub lifecycle_event_configuration:
std::option::Option<crate::model::LifecycleEventConfiguration>,
}
impl UpdateLayerInput {
/// <p>The layer ID.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
/// <p>The layer name, which is used by the console.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>
/// </p>
pub fn shortname(&self) -> std::option::Option<&str> {
self.shortname.as_deref()
}
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn cloud_watch_logs_configuration(
&self,
) -> std::option::Option<&crate::model::CloudWatchLogsConfiguration> {
self.cloud_watch_logs_configuration.as_ref()
}
/// <p>The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more
/// information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn custom_instance_profile_arn(&self) -> std::option::Option<&str> {
self.custom_instance_profile_arn.as_deref()
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>.
/// </p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
/// <p>An array containing the layer's custom security group IDs.</p>
pub fn custom_security_group_ids(&self) -> std::option::Option<&[std::string::String]> {
self.custom_security_group_ids.as_deref()
}
/// <p>An array of <code>Package</code> objects that describe the layer's packages.</p>
pub fn packages(&self) -> std::option::Option<&[std::string::String]> {
self.packages.as_deref()
}
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn volume_configurations(
&self,
) -> std::option::Option<&[crate::model::VolumeConfiguration]> {
self.volume_configurations.as_deref()
}
/// <p>Whether to disable auto healing for the layer.</p>
pub fn enable_auto_healing(&self) -> std::option::Option<bool> {
self.enable_auto_healing
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_elastic_ips(&self) -> std::option::Option<bool> {
self.auto_assign_elastic_ips
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_public_ips(&self) -> std::option::Option<bool> {
self.auto_assign_public_ips
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer's custom recipes.</p>
pub fn custom_recipes(&self) -> std::option::Option<&crate::model::Recipes> {
self.custom_recipes.as_ref()
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(&self) -> std::option::Option<bool> {
self.install_updates_on_boot
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn use_ebs_optimized_instances(&self) -> std::option::Option<bool> {
self.use_ebs_optimized_instances
}
/// <p></p>
pub fn lifecycle_event_configuration(
&self,
) -> std::option::Option<&crate::model::LifecycleEventConfiguration> {
self.lifecycle_event_configuration.as_ref()
}
}
impl std::fmt::Debug for UpdateLayerInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateLayerInput");
formatter.field("layer_id", &self.layer_id);
formatter.field("name", &self.name);
formatter.field("shortname", &self.shortname);
formatter.field("attributes", &self.attributes);
formatter.field(
"cloud_watch_logs_configuration",
&self.cloud_watch_logs_configuration,
);
formatter.field(
"custom_instance_profile_arn",
&self.custom_instance_profile_arn,
);
formatter.field("custom_json", &self.custom_json);
formatter.field("custom_security_group_ids", &self.custom_security_group_ids);
formatter.field("packages", &self.packages);
formatter.field("volume_configurations", &self.volume_configurations);
formatter.field("enable_auto_healing", &self.enable_auto_healing);
formatter.field("auto_assign_elastic_ips", &self.auto_assign_elastic_ips);
formatter.field("auto_assign_public_ips", &self.auto_assign_public_ips);
formatter.field("custom_recipes", &self.custom_recipes);
formatter.field("install_updates_on_boot", &self.install_updates_on_boot);
formatter.field(
"use_ebs_optimized_instances",
&self.use_ebs_optimized_instances,
);
formatter.field(
"lifecycle_event_configuration",
&self.lifecycle_event_configuration,
);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>The instance's layer IDs.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub instance_type: std::option::Option<std::string::String>,
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub auto_scaling_type: std::option::Option<crate::model::AutoScalingType>,
/// <p>The instance host name.</p>
pub hostname: std::option::Option<std::string::String>,
/// <p>The instance's operating system, which must be set to one of the following. You cannot update an instance that is using a custom AMI.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux
/// 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the AmiId parameter to
/// specify the custom AMI that you want to use. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the updated stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub os: std::option::Option<std::string::String>,
/// <p>The ID of the AMI that was used to create the instance. The value of this parameter must be the same AMI ID that the instance is already using.
/// You cannot apply a new AMI to an instance by running UpdateInstance. UpdateInstance does not work on instances that are using custom AMIs.
/// </p>
pub ami_id: std::option::Option<std::string::String>,
/// <p>The instance's Amazon EC2 key name.</p>
pub ssh_key_name: std::option::Option<std::string::String>,
/// <p>The instance architecture. Instance types do not necessarily support both architectures. For
/// a list of the architectures that are supported by the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>.</p>
pub architecture: std::option::Option<crate::model::Architecture>,
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub install_updates_on_boot: std::option::Option<bool>,
/// <p>This property cannot be updated.</p>
pub ebs_optimized: std::option::Option<bool>,
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, you must edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.</p>
/// <p>AgentVersion cannot be set to Chef 12.2.</p>
pub agent_version: std::option::Option<std::string::String>,
}
impl UpdateInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>The instance's layer IDs.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn instance_type(&self) -> std::option::Option<&str> {
self.instance_type.as_deref()
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn auto_scaling_type(&self) -> std::option::Option<&crate::model::AutoScalingType> {
self.auto_scaling_type.as_ref()
}
/// <p>The instance host name.</p>
pub fn hostname(&self) -> std::option::Option<&str> {
self.hostname.as_deref()
}
/// <p>The instance's operating system, which must be set to one of the following. You cannot update an instance that is using a custom AMI.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux
/// 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the AmiId parameter to
/// specify the custom AMI that you want to use. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the updated stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn os(&self) -> std::option::Option<&str> {
self.os.as_deref()
}
/// <p>The ID of the AMI that was used to create the instance. The value of this parameter must be the same AMI ID that the instance is already using.
/// You cannot apply a new AMI to an instance by running UpdateInstance. UpdateInstance does not work on instances that are using custom AMIs.
/// </p>
pub fn ami_id(&self) -> std::option::Option<&str> {
self.ami_id.as_deref()
}
/// <p>The instance's Amazon EC2 key name.</p>
pub fn ssh_key_name(&self) -> std::option::Option<&str> {
self.ssh_key_name.as_deref()
}
/// <p>The instance architecture. Instance types do not necessarily support both architectures. For
/// a list of the architectures that are supported by the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>.</p>
pub fn architecture(&self) -> std::option::Option<&crate::model::Architecture> {
self.architecture.as_ref()
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code>, to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(&self) -> std::option::Option<bool> {
self.install_updates_on_boot
}
/// <p>This property cannot be updated.</p>
pub fn ebs_optimized(&self) -> std::option::Option<bool> {
self.ebs_optimized
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, you must edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>.</p>
/// <p>AgentVersion cannot be set to Chef 12.2.</p>
pub fn agent_version(&self) -> std::option::Option<&str> {
self.agent_version.as_deref()
}
}
impl std::fmt::Debug for UpdateInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("layer_ids", &self.layer_ids);
formatter.field("instance_type", &self.instance_type);
formatter.field("auto_scaling_type", &self.auto_scaling_type);
formatter.field("hostname", &self.hostname);
formatter.field("os", &self.os);
formatter.field("ami_id", &self.ami_id);
formatter.field("ssh_key_name", &self.ssh_key_name);
formatter.field("architecture", &self.architecture);
formatter.field("install_updates_on_boot", &self.install_updates_on_boot);
formatter.field("ebs_optimized", &self.ebs_optimized);
formatter.field("agent_version", &self.agent_version);
formatter.finish()
}
}
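// Editor's note: a minimal, hypothetical sketch (not part of the generated model)
// showing how the accessor methods above can be used to inspect an `UpdateInstanceInput`
// before it is sent. Every name introduced here is illustrative only.
#[allow(dead_code)]
fn describe_update_instance_request(input: &UpdateInstanceInput) -> std::string::String {
    // All fields are optional; fall back to placeholders when they are unset.
    let instance_id = input.instance_id().unwrap_or("<no instance id>");
    let instance_type = input.instance_type().unwrap_or("<unchanged>");
    // Per the field documentation, the service treats a missing value as `true`.
    let updates_on_boot = input.install_updates_on_boot().unwrap_or(true);
    format!(
        "update instance {} (type: {}, install updates on boot: {})",
        instance_id, instance_type, updates_on_boot
    )
}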
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateElasticIpInput {
/// <p>The IP address for which you want to update the name.</p>
pub elastic_ip: std::option::Option<std::string::String>,
/// <p>The new name.</p>
pub name: std::option::Option<std::string::String>,
}
impl UpdateElasticIpInput {
/// <p>The IP address for which you want to update the name.</p>
pub fn elastic_ip(&self) -> std::option::Option<&str> {
self.elastic_ip.as_deref()
}
/// <p>The new name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
}
impl std::fmt::Debug for UpdateElasticIpInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateElasticIpInput");
formatter.field("elastic_ip", &self.elastic_ip);
formatter.field("name", &self.name);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateAppInput {
/// <p>The app ID.</p>
pub app_id: std::option::Option<std::string::String>,
/// <p>The app name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>A description of the app.</p>
pub description: std::option::Option<std::string::String>,
/// <p>The app's data sources.</p>
pub data_sources: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
/// <p>The app type.</p>
pub r#type: std::option::Option<crate::model::AppType>,
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub app_source: std::option::Option<crate::model::Source>,
/// <p>The app's virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub domains: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>Whether SSL is enabled for the app.</p>
pub enable_ssl: std::option::Option<bool>,
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub ssl_configuration: std::option::Option<crate::model::SslConfiguration>,
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment">Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most, if not all, use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20 KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub environment: std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
}
impl UpdateAppInput {
/// <p>The app ID.</p>
pub fn app_id(&self) -> std::option::Option<&str> {
self.app_id.as_deref()
}
/// <p>The app name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>A description of the app.</p>
pub fn description(&self) -> std::option::Option<&str> {
self.description.as_deref()
}
/// <p>The app's data sources.</p>
pub fn data_sources(&self) -> std::option::Option<&[crate::model::DataSource]> {
self.data_sources.as_deref()
}
/// <p>The app type.</p>
pub fn r#type(&self) -> std::option::Option<&crate::model::AppType> {
self.r#type.as_ref()
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn app_source(&self) -> std::option::Option<&crate::model::Source> {
self.app_source.as_ref()
}
/// <p>The app's virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub fn domains(&self) -> std::option::Option<&[std::string::String]> {
self.domains.as_deref()
}
/// <p>Whether SSL is enabled for the app.</p>
pub fn enable_ssl(&self) -> std::option::Option<bool> {
self.enable_ssl
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn ssl_configuration(&self) -> std::option::Option<&crate::model::SslConfiguration> {
self.ssl_configuration.as_ref()
}
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment">Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most, if not all, use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20 KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn environment(&self) -> std::option::Option<&[crate::model::EnvironmentVariable]> {
self.environment.as_deref()
}
}
impl std::fmt::Debug for UpdateAppInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UpdateAppInput");
formatter.field("app_id", &self.app_id);
formatter.field("name", &self.name);
formatter.field("description", &self.description);
formatter.field("data_sources", &self.data_sources);
formatter.field("r#type", &self.r#type);
formatter.field("app_source", &self.app_source);
formatter.field("domains", &self.domains);
formatter.field("enable_ssl", &self.enable_ssl);
formatter.field("ssl_configuration", &self.ssl_configuration);
formatter.field("attributes", &self.attributes);
formatter.field("environment", &self.environment);
formatter.finish()
}
}
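// Editor's note: a hypothetical pre-flight helper (not part of the generated model)
// built only on the accessors above. It counts the environment variables carried by an
// `UpdateAppInput`; the 20 KB size limit described in the field documentation is
// enforced by the service itself, not by this sketch.
#[allow(dead_code)]
fn count_app_environment_entries(input: &UpdateAppInput) -> usize {
    // `environment()` yields an optional slice of `EnvironmentVariable` values.
    input.environment().map(|vars| vars.len()).unwrap_or(0)
}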
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UntagResourceInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub resource_arn: std::option::Option<std::string::String>,
/// <p>A list of the keys of tags to be removed from a stack or layer.</p>
pub tag_keys: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl UntagResourceInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub fn resource_arn(&self) -> std::option::Option<&str> {
self.resource_arn.as_deref()
}
/// <p>A list of the keys of tags to be removed from a stack or layer.</p>
pub fn tag_keys(&self) -> std::option::Option<&[std::string::String]> {
self.tag_keys.as_deref()
}
}
impl std::fmt::Debug for UntagResourceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UntagResourceInput");
formatter.field("resource_arn", &self.resource_arn);
formatter.field("tag_keys", &self.tag_keys);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UnassignVolumeInput {
/// <p>The volume ID.</p>
pub volume_id: std::option::Option<std::string::String>,
}
impl UnassignVolumeInput {
/// <p>The volume ID.</p>
pub fn volume_id(&self) -> std::option::Option<&str> {
self.volume_id.as_deref()
}
}
impl std::fmt::Debug for UnassignVolumeInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UnassignVolumeInput");
formatter.field("volume_id", &self.volume_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UnassignInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl UnassignInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for UnassignInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("UnassignInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct TagResourceInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub resource_arn: std::option::Option<std::string::String>,
/// <p>A map that contains tag keys and tag values that are attached to a stack or layer.</p>
/// <ul>
/// <li>
/// <p>The key cannot be empty.</p>
/// </li>
/// <li>
/// <p>The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>Leading and trailing white spaces are trimmed from both the key and value.</p>
/// </li>
/// <li>
/// <p>A maximum of 40 tags is allowed for any resource.</p>
/// </li>
/// </ul>
pub tags:
std::option::Option<std::collections::HashMap<std::string::String, std::string::String>>,
}
impl TagResourceInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub fn resource_arn(&self) -> std::option::Option<&str> {
self.resource_arn.as_deref()
}
/// <p>A map that contains tag keys and tag values that are attached to a stack or layer.</p>
/// <ul>
/// <li>
/// <p>The key cannot be empty.</p>
/// </li>
/// <li>
/// <p>The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: <code>+ - = . _ : /</code>
/// </p>
/// </li>
/// <li>
/// <p>Leading and trailing white spaces are trimmed from both the key and value.</p>
/// </li>
/// <li>
/// <p>A maximum of 40 tags is allowed for any resource.</p>
/// </li>
/// </ul>
pub fn tags(
&self,
) -> std::option::Option<&std::collections::HashMap<std::string::String, std::string::String>>
{
self.tags.as_ref()
}
}
impl std::fmt::Debug for TagResourceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("TagResourceInput");
formatter.field("resource_arn", &self.resource_arn);
formatter.field("tags", &self.tags);
formatter.finish()
}
}
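// Editor's note: a hypothetical client-side sanity check (not part of the generated
// model) that mirrors the tag constraints documented above: at most 40 tags, keys of
// up to 127 characters, and values of up to 255 characters. The service remains the
// authority on what it accepts; this sketch only catches obvious violations early.
#[allow(dead_code)]
fn tags_look_valid(input: &TagResourceInput) -> bool {
    match input.tags() {
        None => true,
        Some(tags) => {
            tags.len() <= 40
                && tags.iter().all(|(key, value)| {
                    !key.is_empty()
                        && key.chars().count() <= 127
                        && value.chars().count() <= 255
                })
        }
    }
}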
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StopStackInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl StopStackInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for StopStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StopStackInput");
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StopInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>Specifies whether to force an instance to stop. If the instance's root device type is <code>ebs</code>, or EBS-backed,
/// adding the <code>Force</code> parameter to the <code>StopInstances</code> API call disassociates the AWS OpsWorks Stacks instance from EC2, and forces deletion of <i>only</i> the OpsWorks Stacks instance.
/// You must also delete the formerly-associated instance in EC2 after troubleshooting and replacing the AWS OpsWorks Stacks instance with a new one.</p>
pub force: std::option::Option<bool>,
}
impl StopInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>Specifies whether to force an instance to stop. If the instance's root device type is <code>ebs</code>, or EBS-backed,
/// adding the <code>Force</code> parameter to the <code>StopInstances</code> API call disassociates the AWS OpsWorks Stacks instance from EC2, and forces deletion of <i>only</i> the OpsWorks Stacks instance.
/// You must also delete the formerly-associated instance in EC2 after troubleshooting and replacing the AWS OpsWorks Stacks instance with a new one.</p>
pub fn force(&self) -> std::option::Option<bool> {
self.force
}
}
impl std::fmt::Debug for StopInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StopInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("force", &self.force);
formatter.finish()
}
}
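// Editor's note: a hypothetical helper (not part of the generated model) that makes the
// `force` semantics above explicit: `None` and `Some(false)` both request a normal stop,
// while `Some(true)` requests the forced disassociation described in the field docs.
#[allow(dead_code)]
fn is_forced_stop(input: &StopInstanceInput) -> bool {
    input.force().unwrap_or(false)
}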
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StartStackInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl StartStackInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for StartStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StartStackInput");
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StartInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl StartInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for StartInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StartInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct SetTimeBasedAutoScalingInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>An <code>AutoScalingSchedule</code> with the instance schedule.</p>
pub auto_scaling_schedule: std::option::Option<crate::model::WeeklyAutoScalingSchedule>,
}
impl SetTimeBasedAutoScalingInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>An <code>AutoScalingSchedule</code> with the instance schedule.</p>
pub fn auto_scaling_schedule(
&self,
) -> std::option::Option<&crate::model::WeeklyAutoScalingSchedule> {
self.auto_scaling_schedule.as_ref()
}
}
impl std::fmt::Debug for SetTimeBasedAutoScalingInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("SetTimeBasedAutoScalingInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("auto_scaling_schedule", &self.auto_scaling_schedule);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct SetPermissionInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub iam_user_arn: std::option::Option<std::string::String>,
/// <p>Whether the user is allowed to use SSH to communicate with the instance.</p>
pub allow_ssh: std::option::Option<bool>,
/// <p>Whether the user is allowed to use <b>sudo</b> to elevate privileges.</p>
pub allow_sudo: std::option::Option<bool>,
/// <p>The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.</p>
/// <ul>
/// <li>
/// <p>
/// <code>deny</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>show</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>deploy</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>manage</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>iam_only</code>
/// </p>
/// </li>
/// </ul>
/// <p>For more information about the permissions associated with these levels, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html">Managing User Permissions</a>.</p>
pub level: std::option::Option<std::string::String>,
}
impl SetPermissionInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(&self) -> std::option::Option<&str> {
self.iam_user_arn.as_deref()
}
/// <p>Whether the user is allowed to use SSH to communicate with the instance.</p>
pub fn allow_ssh(&self) -> std::option::Option<bool> {
self.allow_ssh
}
/// <p>Whether the user is allowed to use <b>sudo</b> to elevate privileges.</p>
pub fn allow_sudo(&self) -> std::option::Option<bool> {
self.allow_sudo
}
/// <p>The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.</p>
/// <ul>
/// <li>
/// <p>
/// <code>deny</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>show</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>deploy</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>manage</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>iam_only</code>
/// </p>
/// </li>
/// </ul>
/// <p>For more information about the permissions associated with these levels, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html">Managing User Permissions</a>.</p>
pub fn level(&self) -> std::option::Option<&str> {
self.level.as_deref()
}
}
impl std::fmt::Debug for SetPermissionInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("SetPermissionInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("iam_user_arn", &self.iam_user_arn);
formatter.field("allow_ssh", &self.allow_ssh);
formatter.field("allow_sudo", &self.allow_sudo);
formatter.field("level", &self.level);
formatter.finish()
}
}
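// Editor's note: a hypothetical check (not part of the generated model) that compares
// `SetPermissionInput::level` against the permission levels listed in the documentation
// above. An unset level is treated as acceptable because the field is optional.
#[allow(dead_code)]
fn is_known_permission_level(input: &SetPermissionInput) -> bool {
    matches!(
        input.level(),
        None | Some("deny") | Some("show") | Some("deploy") | Some("manage") | Some("iam_only")
    )
}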
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct SetLoadBasedAutoScalingInput {
/// <p>The layer ID.</p>
pub layer_id: std::option::Option<std::string::String>,
/// <p>Enables load-based auto scaling for the layer.</p>
pub enable: std::option::Option<bool>,
/// <p>An <code>AutoScalingThresholds</code> object with the upscaling threshold configuration. If
/// the load exceeds these thresholds for a specified amount of time, AWS OpsWorks Stacks starts a specified
/// number of instances.</p>
pub up_scaling: std::option::Option<crate::model::AutoScalingThresholds>,
/// <p>An <code>AutoScalingThresholds</code> object with the downscaling threshold configuration. If
/// the load falls below these thresholds for a specified amount of time, AWS OpsWorks Stacks stops a specified
/// number of instances.</p>
pub down_scaling: std::option::Option<crate::model::AutoScalingThresholds>,
}
impl SetLoadBasedAutoScalingInput {
/// <p>The layer ID.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
/// <p>Enables load-based auto scaling for the layer.</p>
pub fn enable(&self) -> std::option::Option<bool> {
self.enable
}
/// <p>An <code>AutoScalingThresholds</code> object with the upscaling threshold configuration. If
/// the load exceeds these thresholds for a specified amount of time, AWS OpsWorks Stacks starts a specified
/// number of instances.</p>
pub fn up_scaling(&self) -> std::option::Option<&crate::model::AutoScalingThresholds> {
self.up_scaling.as_ref()
}
/// <p>An <code>AutoScalingThresholds</code> object with the downscaling threshold configuration. If
/// the load falls below these thresholds for a specified amount of time, AWS OpsWorks Stacks stops a specified
/// number of instances.</p>
pub fn down_scaling(&self) -> std::option::Option<&crate::model::AutoScalingThresholds> {
self.down_scaling.as_ref()
}
}
impl std::fmt::Debug for SetLoadBasedAutoScalingInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("SetLoadBasedAutoScalingInput");
formatter.field("layer_id", &self.layer_id);
formatter.field("enable", &self.enable);
formatter.field("up_scaling", &self.up_scaling);
formatter.field("down_scaling", &self.down_scaling);
formatter.finish()
}
}
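// Editor's note: a hypothetical summary helper (not part of the generated model) built on
// the accessors above. It reports whether a request both enables load-based auto scaling
// and supplies at least one threshold object; the exact service-side behavior for partial
// configurations is defined by AWS OpsWorks Stacks, not by this sketch.
#[allow(dead_code)]
fn load_based_scaling_fully_specified(input: &SetLoadBasedAutoScalingInput) -> bool {
    input.enable().unwrap_or(false)
        && (input.up_scaling().is_some() || input.down_scaling().is_some())
}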
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RegisterVolumeInput {
/// <p>The Amazon EBS volume ID.</p>
pub ec2_volume_id: std::option::Option<std::string::String>,
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl RegisterVolumeInput {
/// <p>The Amazon EBS volume ID.</p>
pub fn ec2_volume_id(&self) -> std::option::Option<&str> {
self.ec2_volume_id.as_deref()
}
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for RegisterVolumeInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RegisterVolumeInput");
formatter.field("ec2_volume_id", &self.ec2_volume_id);
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RegisterRdsDbInstanceInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The Amazon RDS instance's ARN.</p>
pub rds_db_instance_arn: std::option::Option<std::string::String>,
/// <p>The database's master user name.</p>
pub db_user: std::option::Option<std::string::String>,
/// <p>The database password.</p>
pub db_password: std::option::Option<std::string::String>,
}
impl RegisterRdsDbInstanceInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(&self) -> std::option::Option<&str> {
self.rds_db_instance_arn.as_deref()
}
/// <p>The database's master user name.</p>
pub fn db_user(&self) -> std::option::Option<&str> {
self.db_user.as_deref()
}
/// <p>The database password.</p>
pub fn db_password(&self) -> std::option::Option<&str> {
self.db_password.as_deref()
}
}
impl std::fmt::Debug for RegisterRdsDbInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RegisterRdsDbInstanceInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("rds_db_instance_arn", &self.rds_db_instance_arn);
formatter.field("db_user", &self.db_user);
formatter.field("db_password", &self.db_password);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RegisterInstanceInput {
/// <p>The ID of the stack that the instance is to be registered with.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The instance's hostname.</p>
pub hostname: std::option::Option<std::string::String>,
/// <p>The instance's public IP address.</p>
pub public_ip: std::option::Option<std::string::String>,
/// <p>The instance's private IP address.</p>
pub private_ip: std::option::Option<std::string::String>,
/// <p>The instance's public RSA key. This key is used to encrypt communication between the instance and the service.</p>
pub rsa_public_key: std::option::Option<std::string::String>,
/// <p>The instance's public RSA key fingerprint.</p>
pub rsa_public_key_fingerprint: std::option::Option<std::string::String>,
/// <p>An InstanceIdentity object that contains the instance's identity.</p>
pub instance_identity: std::option::Option<crate::model::InstanceIdentity>,
}
impl RegisterInstanceInput {
/// <p>The ID of the stack that the instance is to be registered with.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The instance's hostname.</p>
pub fn hostname(&self) -> std::option::Option<&str> {
self.hostname.as_deref()
}
/// <p>The instance's public IP address.</p>
pub fn public_ip(&self) -> std::option::Option<&str> {
self.public_ip.as_deref()
}
/// <p>The instance's private IP address.</p>
pub fn private_ip(&self) -> std::option::Option<&str> {
self.private_ip.as_deref()
}
/// <p>The instance's public RSA key. This key is used to encrypt communication between the instance and the service.</p>
pub fn rsa_public_key(&self) -> std::option::Option<&str> {
self.rsa_public_key.as_deref()
}
/// <p>The instance's public RSA key fingerprint.</p>
pub fn rsa_public_key_fingerprint(&self) -> std::option::Option<&str> {
self.rsa_public_key_fingerprint.as_deref()
}
/// <p>An InstanceIdentity object that contains the instance's identity.</p>
pub fn instance_identity(&self) -> std::option::Option<&crate::model::InstanceIdentity> {
self.instance_identity.as_ref()
}
}
impl std::fmt::Debug for RegisterInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RegisterInstanceInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("hostname", &self.hostname);
formatter.field("public_ip", &self.public_ip);
formatter.field("private_ip", &self.private_ip);
formatter.field("rsa_public_key", &self.rsa_public_key);
formatter.field(
"rsa_public_key_fingerprint",
&self.rsa_public_key_fingerprint,
);
formatter.field("instance_identity", &self.instance_identity);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RegisterElasticIpInput {
/// <p>The Elastic IP address.</p>
pub elastic_ip: std::option::Option<std::string::String>,
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl RegisterElasticIpInput {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(&self) -> std::option::Option<&str> {
self.elastic_ip.as_deref()
}
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for RegisterElasticIpInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RegisterElasticIpInput");
formatter.field("elastic_ip", &self.elastic_ip);
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RegisterEcsClusterInput {
/// <p>The cluster's ARN.</p>
pub ecs_cluster_arn: std::option::Option<std::string::String>,
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl RegisterEcsClusterInput {
/// <p>The cluster's ARN.</p>
pub fn ecs_cluster_arn(&self) -> std::option::Option<&str> {
self.ecs_cluster_arn.as_deref()
}
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for RegisterEcsClusterInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RegisterEcsClusterInput");
formatter.field("ecs_cluster_arn", &self.ecs_cluster_arn);
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RebootInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl RebootInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for RebootInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("RebootInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct ListTagsInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub resource_arn: std::option::Option<std::string::String>,
/// <p>Do not use. A validation exception occurs if you add a <code>MaxResults</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub max_results: i32,
/// <p>Do not use. A validation exception occurs if you add a <code>NextToken</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub next_token: std::option::Option<std::string::String>,
}
impl ListTagsInput {
/// <p>The stack or layer's Amazon Resource Number (ARN).</p>
pub fn resource_arn(&self) -> std::option::Option<&str> {
self.resource_arn.as_deref()
}
/// <p>Do not use. A validation exception occurs if you add a <code>MaxResults</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn max_results(&self) -> i32 {
self.max_results
}
/// <p>Do not use. A validation exception occurs if you add a <code>NextToken</code> parameter to a <code>ListTagsRequest</code> call.
/// </p>
pub fn next_token(&self) -> std::option::Option<&str> {
self.next_token.as_deref()
}
}
impl std::fmt::Debug for ListTagsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("ListTagsInput");
formatter.field("resource_arn", &self.resource_arn);
formatter.field("max_results", &self.max_results);
formatter.field("next_token", &self.next_token);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct GrantAccessInput {
/// <p>The instance's AWS OpsWorks Stacks ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user can no longer use the credentials to log in. If the user is logged in at the time, they are automatically logged out.</p>
pub valid_for_in_minutes: std::option::Option<i32>,
}
impl GrantAccessInput {
/// <p>The instance's AWS OpsWorks Stacks ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user can no longer use the credentials to log in. If the user is logged in at the time, they are automatically logged out.</p>
pub fn valid_for_in_minutes(&self) -> std::option::Option<i32> {
self.valid_for_in_minutes
}
}
impl std::fmt::Debug for GrantAccessInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("GrantAccessInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("valid_for_in_minutes", &self.valid_for_in_minutes);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct GetHostnameSuggestionInput {
/// <p>The layer ID.</p>
pub layer_id: std::option::Option<std::string::String>,
}
impl GetHostnameSuggestionInput {
/// <p>The layer ID.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
}
impl std::fmt::Debug for GetHostnameSuggestionInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("GetHostnameSuggestionInput");
formatter.field("layer_id", &self.layer_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DisassociateElasticIpInput {
/// <p>The Elastic IP address.</p>
pub elastic_ip: std::option::Option<std::string::String>,
}
impl DisassociateElasticIpInput {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(&self) -> std::option::Option<&str> {
self.elastic_ip.as_deref()
}
}
impl std::fmt::Debug for DisassociateElasticIpInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DisassociateElasticIpInput");
formatter.field("elastic_ip", &self.elastic_ip);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DetachElasticLoadBalancerInput {
/// <p>The Elastic Load Balancing instance's name.</p>
pub elastic_load_balancer_name: std::option::Option<std::string::String>,
/// <p>The ID of the layer that the Elastic Load Balancing instance is attached to.</p>
pub layer_id: std::option::Option<std::string::String>,
}
impl DetachElasticLoadBalancerInput {
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn elastic_load_balancer_name(&self) -> std::option::Option<&str> {
self.elastic_load_balancer_name.as_deref()
}
/// <p>The ID of the layer that the Elastic Load Balancing instance is attached to.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
}
impl std::fmt::Debug for DetachElasticLoadBalancerInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DetachElasticLoadBalancerInput");
formatter.field(
"elastic_load_balancer_name",
&self.elastic_load_balancer_name,
);
formatter.field("layer_id", &self.layer_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeVolumesInput {
/// <p>The instance ID. If you use this parameter, <code>DescribeVolumes</code> returns descriptions
/// of the volumes associated with the specified instance.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>A stack ID. The action describes the stack's registered Amazon EBS volumes.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The RAID array ID. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the volumes associated with the specified RAID array.</p>
pub raid_array_id: std::option::Option<std::string::String>,
/// <p>An array of volume IDs. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the specified volumes. Otherwise, it returns a description of every
/// volume.</p>
pub volume_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeVolumesInput {
/// <p>The instance ID. If you use this parameter, <code>DescribeVolumes</code> returns descriptions
/// of the volumes associated with the specified instance.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>A stack ID. The action describes the stack's registered Amazon EBS volumes.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The RAID array ID. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the volumes associated with the specified RAID array.</p>
pub fn raid_array_id(&self) -> std::option::Option<&str> {
self.raid_array_id.as_deref()
}
/// <p>An array of volume IDs. If you use this parameter, <code>DescribeVolumes</code> returns
/// descriptions of the specified volumes. Otherwise, it returns a description of every
/// volume.</p>
pub fn volume_ids(&self) -> std::option::Option<&[std::string::String]> {
self.volume_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeVolumesInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeVolumesInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("stack_id", &self.stack_id);
formatter.field("raid_array_id", &self.raid_array_id);
formatter.field("volume_ids", &self.volume_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeUserProfilesInput {
/// <p>An array of IAM or federated user ARNs that identify the users to be described.</p>
pub iam_user_arns: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeUserProfilesInput {
/// <p>An array of IAM or federated user ARNs that identify the users to be described.</p>
pub fn iam_user_arns(&self) -> std::option::Option<&[std::string::String]> {
self.iam_user_arns.as_deref()
}
}
impl std::fmt::Debug for DescribeUserProfilesInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeUserProfilesInput");
formatter.field("iam_user_arns", &self.iam_user_arns);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeTimeBasedAutoScalingInput {
/// <p>An array of instance IDs.</p>
pub instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeTimeBasedAutoScalingInput {
/// <p>An array of instance IDs.</p>
pub fn instance_ids(&self) -> std::option::Option<&[std::string::String]> {
self.instance_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeTimeBasedAutoScalingInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeTimeBasedAutoScalingInput");
formatter.field("instance_ids", &self.instance_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeStackSummaryInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl DescribeStackSummaryInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for DescribeStackSummaryInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeStackSummaryInput");
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeStacksInput {
/// <p>An array of stack IDs that specify the stacks to be described. If you omit this parameter,
/// <code>DescribeStacks</code> returns a description of every stack.</p>
pub stack_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeStacksInput {
/// <p>An array of stack IDs that specify the stacks to be described. If you omit this parameter,
/// <code>DescribeStacks</code> returns a description of every stack.</p>
pub fn stack_ids(&self) -> std::option::Option<&[std::string::String]> {
self.stack_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeStacksInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeStacksInput");
formatter.field("stack_ids", &self.stack_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeStackProvisioningParametersInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl DescribeStackProvisioningParametersInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for DescribeStackProvisioningParametersInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeStackProvisioningParametersInput");
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeServiceErrorsInput {
/// <p>The stack ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The instance ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified instance.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>An array of service error IDs. If you use this parameter, <code>DescribeServiceErrors</code>
/// returns descriptions of the specified errors. Otherwise, it returns a description of every
/// error.</p>
pub service_error_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeServiceErrorsInput {
/// <p>The stack ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The instance ID. If you use this parameter, <code>DescribeServiceErrors</code> returns
/// descriptions of the errors associated with the specified instance.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>An array of service error IDs. If you use this parameter, <code>DescribeServiceErrors</code>
/// returns descriptions of the specified errors. Otherwise, it returns a description of every
/// error.</p>
pub fn service_error_ids(&self) -> std::option::Option<&[std::string::String]> {
self.service_error_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeServiceErrorsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeServiceErrorsInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("instance_id", &self.instance_id);
formatter.field("service_error_ids", &self.service_error_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeRdsDbInstancesInput {
/// <p>The ID of the stack with which the instances are registered. The operation returns descriptions of all registered Amazon RDS instances.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array containing the ARNs of the instances to be described.</p>
pub rds_db_instance_arns: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeRdsDbInstancesInput {
/// <p>The ID of the stack with which the instances are registered. The operation returns descriptions of all registered Amazon RDS instances.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array containing the ARNs of the instances to be described.</p>
pub fn rds_db_instance_arns(&self) -> std::option::Option<&[std::string::String]> {
self.rds_db_instance_arns.as_deref()
}
}
impl std::fmt::Debug for DescribeRdsDbInstancesInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeRdsDbInstancesInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("rds_db_instance_arns", &self.rds_db_instance_arns);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeRaidArraysInput {
/// <p>The instance ID. If you use this parameter, <code>DescribeRaidArrays</code> returns
/// descriptions of the RAID arrays associated with the specified instance. </p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array of RAID array IDs. If you use this parameter, <code>DescribeRaidArrays</code>
/// returns descriptions of the specified arrays. Otherwise, it returns a description of every
/// array.</p>
pub raid_array_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeRaidArraysInput {
/// <p>The instance ID. If you use this parameter, <code>DescribeRaidArrays</code> returns
/// descriptions of the RAID arrays associated with the specified instance. </p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array of RAID array IDs. If you use this parameter, <code>DescribeRaidArrays</code>
/// returns descriptions of the specified arrays. Otherwise, it returns a description of every
/// array.</p>
pub fn raid_array_ids(&self) -> std::option::Option<&[std::string::String]> {
self.raid_array_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeRaidArraysInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeRaidArraysInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("stack_id", &self.stack_id);
formatter.field("raid_array_ids", &self.raid_array_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribePermissionsInput {
/// <p>The user's IAM ARN. This can also be a federated user's ARN. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub iam_user_arn: std::option::Option<std::string::String>,
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl DescribePermissionsInput {
/// <p>The user's IAM ARN. This can also be a federated user's ARN. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn iam_user_arn(&self) -> std::option::Option<&str> {
self.iam_user_arn.as_deref()
}
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for DescribePermissionsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribePermissionsInput");
formatter.field("iam_user_arn", &self.iam_user_arn);
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeOperatingSystemsInput {}
impl std::fmt::Debug for DescribeOperatingSystemsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeOperatingSystemsInput");
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeMyUserProfileInput {}
impl std::fmt::Debug for DescribeMyUserProfileInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeMyUserProfileInput");
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeLoadBasedAutoScalingInput {
/// <p>An array of layer IDs.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeLoadBasedAutoScalingInput {
/// <p>An array of layer IDs.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeLoadBasedAutoScalingInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeLoadBasedAutoScalingInput");
formatter.field("layer_ids", &self.layer_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeLayersInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array of layer IDs that specify the layers to be described. If you omit this parameter,
/// <code>DescribeLayers</code> returns a description of every layer in the specified stack.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeLayersInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array of layer IDs that specify the layers to be described. If you omit this parameter,
/// <code>DescribeLayers</code> returns a description of every layer in the specified stack.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeLayersInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeLayersInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("layer_ids", &self.layer_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeInstancesInput {
/// <p>A stack ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>A layer ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified layer.</p>
pub layer_id: std::option::Option<std::string::String>,
/// <p>An array of instance IDs to be described. If you use this parameter,
/// <code>DescribeInstances</code> returns a description of the specified instances. Otherwise,
/// it returns a description of every instance.</p>
pub instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeInstancesInput {
/// <p>A stack ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>A layer ID. If you use this parameter, <code>DescribeInstances</code> returns descriptions of
/// the instances associated with the specified layer.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
/// <p>An array of instance IDs to be described. If you use this parameter,
/// <code>DescribeInstances</code> returns a description of the specified instances. Otherwise,
/// it returns a description of every instance.</p>
pub fn instance_ids(&self) -> std::option::Option<&[std::string::String]> {
self.instance_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeInstancesInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeInstancesInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("layer_id", &self.layer_id);
formatter.field("instance_ids", &self.instance_ids);
formatter.finish()
}
}
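// --- Illustrative usage sketch (not part of the generated model) ------------------
// A minimal, hedged example of how code inside this crate might assemble a
// `DescribeInstancesInput` scoped to a single stack and read it back through the
// accessors above. The stack ID is a made-up placeholder; external callers would
// typically use the crate's builder API rather than a struct literal, which only
// compiles here because the type is defined in this crate.
#[cfg(test)]
mod describe_instances_input_usage {
    #[test]
    fn stack_scoped_request() {
        // Set one scoping field; leaving the others as `None` asks the service to
        // describe every instance that matches the chosen scope.
        let input = super::DescribeInstancesInput {
            stack_id: Some("example-stack-id".to_string()),
            layer_id: None,
            instance_ids: None,
        };
        assert_eq!(input.stack_id(), Some("example-stack-id"));
        assert!(input.layer_id().is_none());
        assert!(input.instance_ids().is_none());
    }
}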
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeElasticLoadBalancersInput {
/// <p>A stack ID. The action describes the stack's Elastic Load Balancing instances.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>A list of layer IDs. The action describes the Elastic Load Balancing instances for the specified layers.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeElasticLoadBalancersInput {
/// <p>A stack ID. The action describes the stack's Elastic Load Balancing instances.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>A list of layer IDs. The action describes the Elastic Load Balancing instances for the specified layers.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeElasticLoadBalancersInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeElasticLoadBalancersInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("layer_ids", &self.layer_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeElasticIpsInput {
/// <p>The instance ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses associated with the specified instance.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>A stack ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses that are registered with the specified stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array of Elastic IP addresses to be described. If you include this parameter,
/// <code>DescribeElasticIps</code> returns a description of the specified Elastic IP addresses.
/// Otherwise, it returns a description of every Elastic IP address.</p>
pub ips: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeElasticIpsInput {
/// <p>The instance ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses associated with the specified instance.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>A stack ID. If you include this parameter, <code>DescribeElasticIps</code> returns a
/// description of the Elastic IP addresses that are registered with the specified stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array of Elastic IP addresses to be described. If you include this parameter,
/// <code>DescribeElasticIps</code> returns a description of the specified Elastic IP addresses.
/// Otherwise, it returns a description of every Elastic IP address.</p>
pub fn ips(&self) -> std::option::Option<&[std::string::String]> {
self.ips.as_deref()
}
}
impl std::fmt::Debug for DescribeElasticIpsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeElasticIpsInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("stack_id", &self.stack_id);
formatter.field("ips", &self.ips);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeEcsClustersInput {
/// <p>A list of ARNs, one for each cluster to be described.</p>
pub ecs_cluster_arns: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>A stack ID.
/// <code>DescribeEcsClusters</code> returns a description of the cluster that is registered with the stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>If the previous paginated request did not return all of the remaining results,
/// the response object's <code>NextToken</code> parameter value is set to a token.
/// To retrieve the next set of results, call <code>DescribeEcsClusters</code>
/// again and assign that token to the request object's <code>NextToken</code> parameter.
/// If there are no remaining results, the previous response
/// object's <code>NextToken</code> parameter is set to <code>null</code>.</p>
pub next_token: std::option::Option<std::string::String>,
/// <p>To receive a paginated response, use this parameter to specify the maximum number
/// of results to be returned with a single call. If the number of available results exceeds this maximum, the
/// response includes a <code>NextToken</code> value that you can assign
/// to the <code>NextToken</code> request parameter to get the next set of results.</p>
pub max_results: std::option::Option<i32>,
}
impl DescribeEcsClustersInput {
/// <p>A list of ARNs, one for each cluster to be described.</p>
pub fn ecs_cluster_arns(&self) -> std::option::Option<&[std::string::String]> {
self.ecs_cluster_arns.as_deref()
}
/// <p>A stack ID.
/// <code>DescribeEcsClusters</code> returns a description of the cluster that is registered with the stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>If the previous paginated request did not return all of the remaining results,
/// the response object's <code>NextToken</code> parameter value is set to a token.
/// To retrieve the next set of results, call <code>DescribeEcsClusters</code>
/// again and assign that token to the request object's <code>NextToken</code> parameter.
/// If there are no remaining results, the previous response
/// object's <code>NextToken</code> parameter is set to <code>null</code>.</p>
pub fn next_token(&self) -> std::option::Option<&str> {
self.next_token.as_deref()
}
/// <p>To receive a paginated response, use this parameter to specify the maximum number
/// of results to be returned with a single call. If the number of available results exceeds this maximum, the
/// response includes a <code>NextToken</code> value that you can assign
/// to the <code>NextToken</code> request parameter to get the next set of results.</p>
pub fn max_results(&self) -> std::option::Option<i32> {
self.max_results
}
}
impl std::fmt::Debug for DescribeEcsClustersInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeEcsClustersInput");
formatter.field("ecs_cluster_arns", &self.ecs_cluster_arns);
formatter.field("stack_id", &self.stack_id);
formatter.field("next_token", &self.next_token);
formatter.field("max_results", &self.max_results);
formatter.finish()
}
}
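// --- Illustrative usage sketch (not part of the generated model) ------------------
// A hedged sketch of the pagination contract described above: the first request
// carries no `NextToken`, and a follow-up request echoes the token from the previous
// response together with the same `MaxResults`. The stack ID and token are
// placeholders; a real caller would take the token from the service response.
#[cfg(test)]
mod describe_ecs_clusters_input_usage {
    fn page_request(next_token: Option<String>) -> super::DescribeEcsClustersInput {
        super::DescribeEcsClustersInput {
            ecs_cluster_arns: None,
            stack_id: Some("example-stack-id".to_string()),
            next_token,
            max_results: Some(25),
        }
    }

    #[test]
    fn first_and_follow_up_pages() {
        let first = page_request(None);
        assert!(first.next_token().is_none());

        // Pretend the previous response returned this token.
        let follow_up = page_request(Some("example-next-token".to_string()));
        assert_eq!(follow_up.next_token(), Some("example-next-token"));
        assert_eq!(follow_up.max_results(), Some(25));
    }
}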
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeDeploymentsInput {
/// <p>The stack ID. If you include this parameter, the command returns a
/// description of the deployments associated with the specified stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The app ID. If you include this parameter, the command returns a
/// description of the deployments associated with the specified app.</p>
pub app_id: std::option::Option<std::string::String>,
/// <p>An array of deployment IDs to be described. If you include this parameter,
/// the command returns a description of the specified deployments.
/// Otherwise, it returns a description of every deployment.</p>
pub deployment_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeDeploymentsInput {
/// <p>The stack ID. If you include this parameter, the command returns a
/// description of the deployments associated with the specified stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The app ID. If you include this parameter, the command returns a
/// description of the deployments associated with the specified app.</p>
pub fn app_id(&self) -> std::option::Option<&str> {
self.app_id.as_deref()
}
/// <p>An array of deployment IDs to be described. If you include this parameter,
/// the command returns a description of the specified deployments.
/// Otherwise, it returns a description of every deployment.</p>
pub fn deployment_ids(&self) -> std::option::Option<&[std::string::String]> {
self.deployment_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeDeploymentsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeDeploymentsInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("app_id", &self.app_id);
formatter.field("deployment_ids", &self.deployment_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeCommandsInput {
/// <p>The deployment ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified deployment.</p>
pub deployment_id: std::option::Option<std::string::String>,
/// <p>The instance ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified instance.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>An array of command IDs. If you include this parameter, <code>DescribeCommands</code> returns
/// a description of the specified commands. Otherwise, it returns a description of every
/// command.</p>
pub command_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeCommandsInput {
/// <p>The deployment ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified deployment.</p>
pub fn deployment_id(&self) -> std::option::Option<&str> {
self.deployment_id.as_deref()
}
/// <p>The instance ID. If you include this parameter, <code>DescribeCommands</code> returns a
/// description of the commands associated with the specified instance.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>An array of command IDs. If you include this parameter, <code>DescribeCommands</code> returns
/// a description of the specified commands. Otherwise, it returns a description of every
/// command.</p>
pub fn command_ids(&self) -> std::option::Option<&[std::string::String]> {
self.command_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeCommandsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeCommandsInput");
formatter.field("deployment_id", &self.deployment_id);
formatter.field("instance_id", &self.instance_id);
formatter.field("command_ids", &self.command_ids);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeAppsInput {
/// <p>The app stack ID. If you use this parameter, <code>DescribeApps</code> returns a description
/// of the apps in the specified stack.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array of app IDs for the apps to be described. If you use this parameter,
/// <code>DescribeApps</code> returns a description of the specified apps. Otherwise, it returns
/// a description of every app.</p>
pub app_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl DescribeAppsInput {
/// <p>The app stack ID. If you use this parameter, <code>DescribeApps</code> returns a description
/// of the apps in the specified stack.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array of app IDs for the apps to be described. If you use this parameter,
/// <code>DescribeApps</code> returns a description of the specified apps. Otherwise, it returns
/// a description of every app.</p>
pub fn app_ids(&self) -> std::option::Option<&[std::string::String]> {
self.app_ids.as_deref()
}
}
impl std::fmt::Debug for DescribeAppsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeAppsInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("app_ids", &self.app_ids);
formatter.finish()
}
}
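// --- Illustrative usage sketch (not part of the generated model) ------------------
// A small, hedged example showing the explicit-ID form of this input and that the
// hand-written `Debug` impl above prints the snake_case field names. All app IDs are
// placeholders.
#[cfg(test)]
mod describe_apps_input_usage {
    #[test]
    fn explicit_app_ids() {
        let input = super::DescribeAppsInput {
            stack_id: None,
            app_ids: Some(vec![
                "example-app-id-1".to_string(),
                "example-app-id-2".to_string(),
            ]),
        };
        assert_eq!(input.app_ids().map(|ids| ids.len()), Some(2));

        // The Debug output lists the fields registered in the impl above.
        let rendered = format!("{:?}", input);
        assert!(rendered.contains("DescribeAppsInput"));
        assert!(rendered.contains("app_ids"));
    }
}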
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DescribeAgentVersionsInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The configuration manager.</p>
pub configuration_manager: std::option::Option<crate::model::StackConfigurationManager>,
}
impl DescribeAgentVersionsInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The configuration manager.</p>
pub fn configuration_manager(
&self,
) -> std::option::Option<&crate::model::StackConfigurationManager> {
self.configuration_manager.as_ref()
}
}
impl std::fmt::Debug for DescribeAgentVersionsInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DescribeAgentVersionsInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("configuration_manager", &self.configuration_manager);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeregisterVolumeInput {
/// <p>The AWS OpsWorks Stacks volume ID, which is the GUID that AWS OpsWorks Stacks assigned to the volume when you registered the volume with the stack, not the Amazon EC2 volume ID.</p>
pub volume_id: std::option::Option<std::string::String>,
}
impl DeregisterVolumeInput {
/// <p>The AWS OpsWorks Stacks volume ID, which is the GUID that AWS OpsWorks Stacks assigned to the volume when you registered the volume with the stack, not the Amazon EC2 volume ID.</p>
pub fn volume_id(&self) -> std::option::Option<&str> {
self.volume_id.as_deref()
}
}
impl std::fmt::Debug for DeregisterVolumeInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeregisterVolumeInput");
formatter.field("volume_id", &self.volume_id);
formatter.finish()
}
}
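// --- Illustrative usage sketch (not part of the generated model) ------------------
// A hedged reminder of the doc comment above: the value passed here is the OpsWorks
// Stacks volume GUID assigned at registration time, not the `vol-xxxxxxxx` style
// Amazon EC2 volume ID. The GUID below is a made-up placeholder.
#[cfg(test)]
mod deregister_volume_input_usage {
    #[test]
    fn uses_opsworks_volume_guid() {
        let input = super::DeregisterVolumeInput {
            // OpsWorks Stacks volume ID (GUID), not the EC2 volume ID.
            volume_id: Some("11111111-2222-3333-4444-555555555555".to_string()),
        };
        assert_eq!(
            input.volume_id(),
            Some("11111111-2222-3333-4444-555555555555")
        );
    }
}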
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeregisterRdsDbInstanceInput {
/// <p>The Amazon RDS instance's ARN.</p>
pub rds_db_instance_arn: std::option::Option<std::string::String>,
}
impl DeregisterRdsDbInstanceInput {
/// <p>The Amazon RDS instance's ARN.</p>
pub fn rds_db_instance_arn(&self) -> std::option::Option<&str> {
self.rds_db_instance_arn.as_deref()
}
}
impl std::fmt::Debug for DeregisterRdsDbInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeregisterRdsDbInstanceInput");
formatter.field("rds_db_instance_arn", &self.rds_db_instance_arn);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeregisterInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl DeregisterInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for DeregisterInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeregisterInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeregisterElasticIpInput {
/// <p>The Elastic IP address.</p>
pub elastic_ip: std::option::Option<std::string::String>,
}
impl DeregisterElasticIpInput {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(&self) -> std::option::Option<&str> {
self.elastic_ip.as_deref()
}
}
impl std::fmt::Debug for DeregisterElasticIpInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeregisterElasticIpInput");
formatter.field("elastic_ip", &self.elastic_ip);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeregisterEcsClusterInput {
/// <p>The cluster's Amazon Resource Name (ARN).</p>
pub ecs_cluster_arn: std::option::Option<std::string::String>,
}
impl DeregisterEcsClusterInput {
/// <p>The cluster's Amazon Resource Name (ARN).</p>
pub fn ecs_cluster_arn(&self) -> std::option::Option<&str> {
self.ecs_cluster_arn.as_deref()
}
}
impl std::fmt::Debug for DeregisterEcsClusterInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeregisterEcsClusterInput");
formatter.field("ecs_cluster_arn", &self.ecs_cluster_arn);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeleteUserProfileInput {
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub iam_user_arn: std::option::Option<std::string::String>,
}
impl DeleteUserProfileInput {
/// <p>The user's IAM ARN. This can also be a federated user's ARN.</p>
pub fn iam_user_arn(&self) -> std::option::Option<&str> {
self.iam_user_arn.as_deref()
}
}
impl std::fmt::Debug for DeleteUserProfileInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeleteUserProfileInput");
formatter.field("iam_user_arn", &self.iam_user_arn);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeleteStackInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
}
impl DeleteStackInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
}
impl std::fmt::Debug for DeleteStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeleteStackInput");
formatter.field("stack_id", &self.stack_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeleteLayerInput {
/// <p>The layer ID.</p>
pub layer_id: std::option::Option<std::string::String>,
}
impl DeleteLayerInput {
/// <p>The layer ID.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
}
impl std::fmt::Debug for DeleteLayerInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeleteLayerInput");
formatter.field("layer_id", &self.layer_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeleteInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>Whether to delete the instance Elastic IP address.</p>
pub delete_elastic_ip: std::option::Option<bool>,
/// <p>Whether to delete the instance's Amazon EBS volumes.</p>
pub delete_volumes: std::option::Option<bool>,
}
impl DeleteInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>Whether to delete the instance Elastic IP address.</p>
pub fn delete_elastic_ip(&self) -> std::option::Option<bool> {
self.delete_elastic_ip
}
/// <p>Whether to delete the instance's Amazon EBS volumes.</p>
pub fn delete_volumes(&self) -> std::option::Option<bool> {
self.delete_volumes
}
}
impl std::fmt::Debug for DeleteInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeleteInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("delete_elastic_ip", &self.delete_elastic_ip);
formatter.field("delete_volumes", &self.delete_volumes);
formatter.finish()
}
}
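// --- Illustrative usage sketch (not part of the generated model) ------------------
// A hedged example of the two optional clean-up flags on `DeleteInstanceInput`.
// Leaving a flag as `None` defers to the service-side default; a caller that wants
// an explicit local decision can fold `None` into `false` as shown. The instance ID
// is a placeholder.
#[cfg(test)]
mod delete_instance_input_usage {
    #[test]
    fn clean_up_flags() {
        let input = super::DeleteInstanceInput {
            instance_id: Some("example-instance-id".to_string()),
            delete_elastic_ip: Some(true),
            delete_volumes: None,
        };
        assert_eq!(input.delete_elastic_ip(), Some(true));
        // Treat an unset flag as "do not delete" on the caller side.
        assert!(!input.delete_volumes().unwrap_or(false));
    }
}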
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DeleteAppInput {
/// <p>The app ID.</p>
pub app_id: std::option::Option<std::string::String>,
}
impl DeleteAppInput {
/// <p>The app ID.</p>
pub fn app_id(&self) -> std::option::Option<&str> {
self.app_id.as_deref()
}
}
impl std::fmt::Debug for DeleteAppInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("DeleteAppInput");
formatter.field("app_id", &self.app_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateUserProfileInput {
/// <p>The user's IAM ARN; this can also be a federated user's ARN.</p>
pub iam_user_arn: std::option::Option<std::string::String>,
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub ssh_username: std::option::Option<std::string::String>,
/// <p>The user's public SSH key.</p>
pub ssh_public_key: std::option::Option<std::string::String>,
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Setting an IAM
/// User's Public SSH Key</a>.</p>
pub allow_self_management: std::option::Option<bool>,
}
impl CreateUserProfileInput {
/// <p>The user's IAM ARN; this can also be a federated user's ARN.</p>
pub fn iam_user_arn(&self) -> std::option::Option<&str> {
self.iam_user_arn.as_deref()
}
/// <p>The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If
/// the specified name includes other punctuation marks, AWS OpsWorks Stacks removes them. For example,
/// <code>my.name</code> will be changed to <code>myname</code>. If you do not specify an SSH
/// user name, AWS OpsWorks Stacks generates one from the IAM user name. </p>
pub fn ssh_username(&self) -> std::option::Option<&str> {
self.ssh_username.as_deref()
}
/// <p>The user's public SSH key.</p>
pub fn ssh_public_key(&self) -> std::option::Option<&str> {
self.ssh_public_key.as_deref()
}
/// <p>Whether users can specify their own SSH public key through the My Settings page. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html">Setting an IAM
/// User's Public SSH Key</a>.</p>
pub fn allow_self_management(&self) -> std::option::Option<bool> {
self.allow_self_management
}
}
impl std::fmt::Debug for CreateUserProfileInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateUserProfileInput");
formatter.field("iam_user_arn", &self.iam_user_arn);
formatter.field("ssh_username", &self.ssh_username);
formatter.field("ssh_public_key", &self.ssh_public_key);
formatter.field("allow_self_management", &self.allow_self_management);
formatter.finish()
}
}
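// --- Illustrative usage sketch (not part of the generated model) ------------------
// A hedged example of assembling a `CreateUserProfileInput`. The SSH user name
// sanitization described above ("my.name" becoming "myname") happens in the service,
// not in this struct, so the value is stored exactly as provided. The ARN is the
// standard placeholder account, not a real identity.
#[cfg(test)]
mod create_user_profile_input_usage {
    #[test]
    fn self_managed_user() {
        let input = super::CreateUserProfileInput {
            iam_user_arn: Some("arn:aws:iam::111122223333:user/example-user".to_string()),
            ssh_username: Some("example-user".to_string()),
            ssh_public_key: None,
            allow_self_management: Some(true),
        };
        assert_eq!(input.ssh_username(), Some("example-user"));
        assert_eq!(input.allow_self_management(), Some(true));
    }
}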
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateStackInput {
/// <p>The stack name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>The stack's AWS region, such as <code>ap-south-1</code>. For more information about
/// Amazon regions, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
/// <note>
/// <p>In the AWS CLI, this API maps to the <code>--stack-region</code> parameter. If the
/// <code>--stack-region</code> parameter and the AWS CLI common parameter
/// <code>--region</code> are set to the same value, the stack uses a
/// <i>regional</i> endpoint. If the <code>--stack-region</code>
/// parameter is not set, but the AWS CLI <code>--region</code> parameter is, this also
/// results in a stack with a <i>regional</i> endpoint. However, if the
/// <code>--region</code> parameter is set to <code>us-east-1</code>, and the
/// <code>--stack-region</code> parameter is set to one of the following, then the
/// stack uses a legacy or <i>classic</i> region: <code>us-west-1,
/// us-west-2, sa-east-1, eu-central-1, eu-west-1, ap-northeast-1, ap-southeast-1,
/// ap-southeast-2</code>. In this case, the actual API endpoint of the stack is in
/// <code>us-east-1</code>. Only the preceding regions are supported as classic
/// regions in the <code>us-east-1</code> API endpoint. Because it is a best practice to
/// choose the regional endpoint that is closest to where you manage AWS, we recommend
/// that you use regional endpoints for new stacks. The AWS CLI common
/// <code>--region</code> parameter always specifies a regional API endpoint; it
/// cannot be used to specify a classic AWS OpsWorks Stacks region.</p>
/// </note>
pub region: std::option::Option<std::string::String>,
/// <p>The ID of the VPC that the stack is to be launched into. The VPC must be in the stack's region. All instances are launched into this VPC. You cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2-Classic, the default value is <code>no VPC</code>.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2-Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the
/// other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively.</p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2-Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub vpc_id: std::option::Option<std::string::String>,
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
/// <p>The stack's AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub service_role_arn: std::option::Option<std::string::String>,
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub default_instance_profile_arn: std::option::Option<std::string::String>,
/// <p>The stack's default operating system, which is installed on every instance unless you specify a different operating system when you create the instance. You can specify one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">
/// Using Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the current Amazon Linux version.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub default_os: std::option::Option<std::string::String>,
/// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub hostname_theme: std::option::Option<std::string::String>,
/// <p>The stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub default_availability_zone: std::option::Option<std::string::String>,
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub default_subnet_id: std::option::Option<std::string::String>,
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration attribute values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub custom_json: std::option::Option<std::string::String>,
/// <p>The configuration manager. When you create a stack we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub configuration_manager: std::option::Option<crate::model::StackConfigurationManager>,
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
/// <p>Whether the stack uses custom cookbooks.</p>
pub use_custom_cookbooks: std::option::Option<bool>,
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub use_opsworks_security_groups: std::option::Option<bool>,
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub custom_cookbooks_source: std::option::Option<crate::model::Source>,
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub default_ssh_key_name: std::option::Option<std::string::String>,
/// <p>The default root device type. This value is the default for all instances in the stack,
/// but you can override it when you create an instance. The default option is
/// <code>instance-store</code>. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is the most recent release of the agent. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub agent_version: std::option::Option<std::string::String>,
}
impl CreateStackInput {
/// <p>The stack name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>The stack's AWS region, such as <code>ap-south-1</code>. For more information about
/// Amazon regions, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
/// <note>
/// <p>In the AWS CLI, this API maps to the <code>--stack-region</code> parameter. If the
/// <code>--stack-region</code> parameter and the AWS CLI common parameter
/// <code>--region</code> are set to the same value, the stack uses a
/// <i>regional</i> endpoint. If the <code>--stack-region</code>
/// parameter is not set, but the AWS CLI <code>--region</code> parameter is, this also
/// results in a stack with a <i>regional</i> endpoint. However, if the
/// <code>--region</code> parameter is set to <code>us-east-1</code>, and the
/// <code>--stack-region</code> parameter is set to one of the following, then the
/// stack uses a legacy or <i>classic</i> region: <code>us-west-1,
/// us-west-2, sa-east-1, eu-central-1, eu-west-1, ap-northeast-1, ap-southeast-1,
/// ap-southeast-2</code>. In this case, the actual API endpoint of the stack is in
/// <code>us-east-1</code>. Only the preceding regions are supported as classic
/// regions in the <code>us-east-1</code> API endpoint. Because it is a best practice to
/// choose the regional endpoint that is closest to where you manage AWS, we recommend
/// that you use regional endpoints for new stacks. The AWS CLI common
/// <code>--region</code> parameter always specifies a regional API endpoint; it
/// cannot be used to specify a classic AWS OpsWorks Stacks region.</p>
/// </note>
pub fn region(&self) -> std::option::Option<&str> {
self.region.as_deref()
}
/// <p>The ID of the VPC that the stack is to be launched into. The VPC must be in the stack's region. All instances are launched into this VPC. You cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2-Classic, the default value is <code>no VPC</code>.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2-Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the
/// other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively.</p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2-Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub fn vpc_id(&self) -> std::option::Option<&str> {
self.vpc_id.as_deref()
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>The stack's AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn service_role_arn(&self) -> std::option::Option<&str> {
self.service_role_arn.as_deref()
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(&self) -> std::option::Option<&str> {
self.default_instance_profile_arn.as_deref()
}
/// <p>The stack's default operating system, which is installed on every instance unless you specify a different operating system when you create the instance. You can specify one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more
/// information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">
/// Using Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the current Amazon Linux version.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
pub fn default_os(&self) -> std::option::Option<&str> {
self.default_os.as_deref()
}
/// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(&self) -> std::option::Option<&str> {
self.hostname_theme.as_deref()
}
/// <p>The stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn default_availability_zone(&self) -> std::option::Option<&str> {
self.default_availability_zone.as_deref()
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(&self) -> std::option::Option<&str> {
self.default_subnet_id.as_deref()
}
/// <p>A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration attribute values or to pass data to recipes. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>.</p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
/// <p>The configuration manager. When you create a stack we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
&self,
) -> std::option::Option<&crate::model::StackConfigurationManager> {
self.configuration_manager.as_ref()
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(&self) -> std::option::Option<&crate::model::ChefConfiguration> {
self.chef_configuration.as_ref()
}
/// <p>Whether the stack uses custom cookbooks.</p>
pub fn use_custom_cookbooks(&self) -> std::option::Option<bool> {
self.use_custom_cookbooks
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate EC2 security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(&self) -> std::option::Option<bool> {
self.use_opsworks_security_groups
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(&self) -> std::option::Option<&crate::model::Source> {
self.custom_cookbooks_source.as_ref()
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(&self) -> std::option::Option<&str> {
self.default_ssh_key_name.as_deref()
}
/// <p>The default root device type. This value is the default for all instances in the stack,
/// but you can override it when you create an instance. The default option is
/// <code>instance-store</code>. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(&self) -> std::option::Option<&crate::model::RootDeviceType> {
self.default_root_device_type.as_ref()
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update the agent version, you must edit the stack configuration and specify a new version. AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is the most recent release of the agent. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn agent_version(&self) -> std::option::Option<&str> {
self.agent_version.as_deref()
}
}
impl std::fmt::Debug for CreateStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateStackInput");
formatter.field("name", &self.name);
formatter.field("region", &self.region);
formatter.field("vpc_id", &self.vpc_id);
formatter.field("attributes", &self.attributes);
formatter.field("service_role_arn", &self.service_role_arn);
formatter.field(
"default_instance_profile_arn",
&self.default_instance_profile_arn,
);
formatter.field("default_os", &self.default_os);
formatter.field("hostname_theme", &self.hostname_theme);
formatter.field("default_availability_zone", &self.default_availability_zone);
formatter.field("default_subnet_id", &self.default_subnet_id);
formatter.field("custom_json", &self.custom_json);
formatter.field("configuration_manager", &self.configuration_manager);
formatter.field("chef_configuration", &self.chef_configuration);
formatter.field("use_custom_cookbooks", &self.use_custom_cookbooks);
formatter.field(
"use_opsworks_security_groups",
&self.use_opsworks_security_groups,
);
formatter.field("custom_cookbooks_source", &self.custom_cookbooks_source);
formatter.field("default_ssh_key_name", &self.default_ssh_key_name);
formatter.field("default_root_device_type", &self.default_root_device_type);
formatter.field("agent_version", &self.agent_version);
formatter.finish()
}
}
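/// Illustrative sketch only: the example below is not generated from the service model. It
/// assumes the crate's generated builder API (`CreateLayerInput::builder()` with per-field
/// setters and a fallible `build()`), and that `LayerType::Custom` is the variant name for
/// the custom layer type. IDs are placeholders, and the block is marked `ignore` so it is
/// not compiled as a doc test.
///
/// ```ignore
/// // Minimal custom layer: stack ID plus the layer type/name/shortname, and a few of the
/// // boolean options documented on the fields below.
/// let input = CreateLayerInput::builder()
///     .stack_id("stack-id-placeholder")
///     .r#type(crate::model::LayerType::Custom)   // assumed enum variant for a custom layer
///     .name("My Custom Layer")
///     .shortname("customlayer")                  // custom layers only; max 200 chars
///     .enable_auto_healing(true)
///     .auto_assign_public_ips(false)
///     .install_updates_on_boot(true)             // default is true; shown for clarity
///     .build()?;
/// ```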
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateLayerInput {
/// <p>The layer stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The layer type. A stack cannot have more than one built-in layer of the same type. It can have any number of custom layers. Built-in layers are not available in Chef 12 stacks.</p>
pub r#type: std::option::Option<crate::model::LayerType>,
/// <p>The layer name, which is used by the console.</p>
pub name: std::option::Option<std::string::String>,
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>.</p>
pub shortname: std::option::Option<std::string::String>,
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
/// <p>To create a cluster layer, set the <code>EcsClusterArn</code> attribute to the cluster's ARN.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
>,
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub cloud_watch_logs_configuration:
std::option::Option<crate::model::CloudWatchLogsConfiguration>,
/// <p>The ARN of an IAM profile to be used for the layer's EC2 instances. For more information
/// about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using Identifiers</a>.</p>
pub custom_instance_profile_arn: std::option::Option<std::string::String>,
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>. This feature is supported as of version 1.7.42 of the AWS CLI.
/// </p>
pub custom_json: std::option::Option<std::string::String>,
/// <p>An array containing the layer custom security group IDs.</p>
pub custom_security_group_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>An array of <code>Package</code> objects that describes the layer packages.</p>
pub packages: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub volume_configurations:
std::option::Option<std::vec::Vec<crate::model::VolumeConfiguration>>,
/// <p>Whether to disable auto healing for the layer.</p>
pub enable_auto_healing: std::option::Option<bool>,
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub auto_assign_elastic_ips: std::option::Option<bool>,
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub auto_assign_public_ips: std::option::Option<bool>,
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer custom recipes.</p>
pub custom_recipes: std::option::Option<crate::model::Recipes>,
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>To ensure that your
/// instances have the latest security updates, we strongly recommend using the default value of <code>true</code>.</p>
/// </note>
pub install_updates_on_boot: std::option::Option<bool>,
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub use_ebs_optimized_instances: std::option::Option<bool>,
/// <p>A <code>LifeCycleEventConfiguration</code> object that you can use to configure the Shutdown event to
/// specify an execution timeout and enable or disable Elastic Load Balancer connection
/// draining.</p>
pub lifecycle_event_configuration:
std::option::Option<crate::model::LifecycleEventConfiguration>,
}
impl CreateLayerInput {
/// <p>The layer stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The layer type. A stack cannot have more than one built-in layer of the same type. It can have any number of custom layers. Built-in layers are not available in Chef 12 stacks.</p>
pub fn r#type(&self) -> std::option::Option<&crate::model::LayerType> {
self.r#type.as_ref()
}
/// <p>The layer name, which is used by the console.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef recipes. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 200 characters, which are limited to the alphanumeric characters, '-', '_', and '.'.</p>
/// <p>The built-in layers' short names are defined by AWS OpsWorks Stacks. For more information, see the <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/layers.html">Layer Reference</a>.</p>
pub fn shortname(&self) -> std::option::Option<&str> {
self.shortname.as_deref()
}
/// <p>One or more user-defined key-value pairs to be added to the stack attributes.</p>
/// <p>To create a cluster layer, set the <code>EcsClusterArn</code> attribute to the cluster's ARN.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::LayerAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>Specifies CloudWatch Logs configuration options for the layer. For more information, see <a>CloudWatchLogsLogStream</a>.</p>
pub fn cloud_watch_logs_configuration(
&self,
) -> std::option::Option<&crate::model::CloudWatchLogsConfiguration> {
self.cloud_watch_logs_configuration.as_ref()
}
/// <p>The ARN of an IAM profile to be used for the layer's EC2 instances. For more information
/// about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using Identifiers</a>.</p>
pub fn custom_instance_profile_arn(&self) -> std::option::Option<&str> {
self.custom_instance_profile_arn.as_deref()
}
/// <p>A JSON-formatted string containing custom stack configuration and deployment attributes
/// to be installed on the layer's instances. For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">
/// Using Custom JSON</a>. This feature is supported as of version 1.7.42 of the AWS CLI.
/// </p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
/// <p>An array containing the layer custom security group IDs.</p>
pub fn custom_security_group_ids(&self) -> std::option::Option<&[std::string::String]> {
self.custom_security_group_ids.as_deref()
}
/// <p>An array of <code>Package</code> objects that describes the layer packages.</p>
pub fn packages(&self) -> std::option::Option<&[std::string::String]> {
self.packages.as_deref()
}
/// <p>A <code>VolumeConfigurations</code> object that describes the layer's Amazon EBS volumes.</p>
pub fn volume_configurations(
&self,
) -> std::option::Option<&[crate::model::VolumeConfiguration]> {
self.volume_configurations.as_deref()
}
/// <p>Whether to disable auto healing for the layer.</p>
pub fn enable_auto_healing(&self) -> std::option::Option<bool> {
self.enable_auto_healing
}
/// <p>Whether to automatically assign an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">Elastic IP
/// address</a> to the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_elastic_ips(&self) -> std::option::Option<bool> {
self.auto_assign_elastic_ips
}
/// <p>For stacks that are running in a VPC, whether to automatically assign a public IP address to
/// the layer's instances. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-basics-edit.html">How to Edit
/// a Layer</a>.</p>
pub fn auto_assign_public_ips(&self) -> std::option::Option<bool> {
self.auto_assign_public_ips
}
/// <p>A <code>LayerCustomRecipes</code> object that specifies the layer custom recipes.</p>
pub fn custom_recipes(&self) -> std::option::Option<&crate::model::Recipes> {
self.custom_recipes.as_ref()
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>To ensure that your
/// instances have the latest security updates, we strongly recommend using the default value of <code>true</code>.</p>
/// </note>
pub fn install_updates_on_boot(&self) -> std::option::Option<bool> {
self.install_updates_on_boot
}
/// <p>Whether to use Amazon EBS-optimized instances.</p>
pub fn use_ebs_optimized_instances(&self) -> std::option::Option<bool> {
self.use_ebs_optimized_instances
}
/// <p>A <code>LifeCycleEventConfiguration</code> object that you can use to configure the Shutdown event to
/// specify an execution timeout and enable or disable Elastic Load Balancer connection
/// draining.</p>
pub fn lifecycle_event_configuration(
&self,
) -> std::option::Option<&crate::model::LifecycleEventConfiguration> {
self.lifecycle_event_configuration.as_ref()
}
}
impl std::fmt::Debug for CreateLayerInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateLayerInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("r#type", &self.r#type);
formatter.field("name", &self.name);
formatter.field("shortname", &self.shortname);
formatter.field("attributes", &self.attributes);
formatter.field(
"cloud_watch_logs_configuration",
&self.cloud_watch_logs_configuration,
);
formatter.field(
"custom_instance_profile_arn",
&self.custom_instance_profile_arn,
);
formatter.field("custom_json", &self.custom_json);
formatter.field("custom_security_group_ids", &self.custom_security_group_ids);
formatter.field("packages", &self.packages);
formatter.field("volume_configurations", &self.volume_configurations);
formatter.field("enable_auto_healing", &self.enable_auto_healing);
formatter.field("auto_assign_elastic_ips", &self.auto_assign_elastic_ips);
formatter.field("auto_assign_public_ips", &self.auto_assign_public_ips);
formatter.field("custom_recipes", &self.custom_recipes);
formatter.field("install_updates_on_boot", &self.install_updates_on_boot);
formatter.field(
"use_ebs_optimized_instances",
&self.use_ebs_optimized_instances,
);
formatter.field(
"lifecycle_event_configuration",
&self.lifecycle_event_configuration,
);
formatter.finish()
}
}
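/// Illustrative sketch only (not generated from the service model): it assumes the crate's
/// generated builder API, that list setters such as `layer_ids(...)` append one element per
/// call, and that `RootDeviceType::Ebs` is the variant name for the `ebs` root device type.
/// IDs are placeholders; the block is marked `ignore` so it is not compiled as a doc test.
///
/// ```ignore
/// let input = CreateInstanceInput::builder()
///     .stack_id("stack-id-placeholder")
///     .layer_ids("layer-id-placeholder")      // assumed appending setter for the Vec field
///     .instance_type("t2.micro")              // API Name from the Available Instance Types table
///     .os("Amazon Linux 2018.03")
///     .root_device_type(crate::model::RootDeviceType::Ebs)
///     .install_updates_on_boot(true)
///     .build()?;
/// ```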
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateInstanceInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>An array that contains the instance's layer IDs.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub instance_type: std::option::Option<std::string::String>,
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub auto_scaling_type: std::option::Option<crate::model::AutoScalingType>,
/// <p>The instance host name.</p>
pub hostname: std::option::Option<std::string::String>,
/// <p>The instance's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about the supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the <a>CreateInstance</a> action's AmiId parameter to
/// specify the custom AMI that you want to use. Block device mappings are not supported if the value is <code>Custom</code>. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with AWS OpsWorks Stacks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
pub os: std::option::Option<std::string::String>,
/// <p>A custom AMI ID to be used to create the instance. The AMI should be based on one of the
/// supported operating systems.
/// For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using Custom AMIs</a>.</p>
/// <note>
/// <p>If you specify a custom AMI, you must set <code>Os</code> to <code>Custom</code>.</p>
/// </note>
pub ami_id: std::option::Option<std::string::String>,
/// <p>The instance's Amazon EC2 key-pair name.</p>
pub ssh_key_name: std::option::Option<std::string::String>,
/// <p>The instance Availability Zone. For more information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub availability_zone: std::option::Option<std::string::String>,
/// <p>The instance's virtualization type, <code>paravirtual</code> or <code>hvm</code>.</p>
pub virtualization_type: std::option::Option<std::string::String>,
/// <p>The ID of the instance's subnet. If the stack is running in a VPC, you can use this parameter to override the stack's default subnet ID value and direct AWS OpsWorks Stacks to launch the instance in a different subnet.</p>
pub subnet_id: std::option::Option<std::string::String>,
/// <p>The instance architecture. The default option is <code>x86_64</code>. Instance types do not
/// necessarily support both architectures. For a list of the architectures that are supported by
/// the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance Families and
/// Types</a>.</p>
pub architecture: std::option::Option<crate::model::Architecture>,
/// <p>The instance root device type. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub root_device_type: std::option::Option<crate::model::RootDeviceType>,
/// <p>An array of <code>BlockDeviceMapping</code> objects that specify the instance's block
/// devices. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html">Block
/// Device Mapping</a>. Note that block device mappings are not supported for custom AMIs.</p>
pub block_device_mappings: std::option::Option<std::vec::Vec<crate::model::BlockDeviceMapping>>,
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code> to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub install_updates_on_boot: std::option::Option<bool>,
/// <p>Whether to create an Amazon EBS-optimized instance.</p>
pub ebs_optimized: std::option::Option<bool>,
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
pub agent_version: std::option::Option<std::string::String>,
/// <p>The instance's tenancy option. The default option is no tenancy, or if the instance is running in a VPC, inherit tenancy settings from the VPC. The following are valid values for this parameter: <code>dedicated</code>, <code>default</code>, or <code>host</code>. Because there are costs associated with changes in tenancy options, we recommend that you research tenancy options before choosing them for your instances. For more information about dedicated hosts, see <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Dedicated Hosts Overview</a> and <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Amazon EC2 Dedicated Hosts</a>. For more information about dedicated instances, see <a href="https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html">Dedicated Instances</a> and <a href="http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/">Amazon EC2 Dedicated Instances</a>.</p>
pub tenancy: std::option::Option<std::string::String>,
}
impl CreateInstanceInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>An array that contains the instance's layer IDs.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
/// <p>The instance type, such as <code>t2.micro</code>. For a list of supported instance types,
/// open the stack in the console, choose <b>Instances</b>, and choose <b>+ Instance</b>.
/// The <b>Size</b> list contains the currently supported types. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance
/// Families and Types</a>. The parameter values that you use to specify the various types are
/// in the <b>API Name</b> column of the <b>Available Instance Types</b> table.</p>
pub fn instance_type(&self) -> std::option::Option<&str> {
self.instance_type.as_deref()
}
/// <p>For load-based or time-based instances, the type. Windows stacks can use only time-based instances.</p>
pub fn auto_scaling_type(&self) -> std::option::Option<&crate::model::AutoScalingType> {
self.auto_scaling_type.as_ref()
}
/// <p>The instance host name.</p>
pub fn hostname(&self) -> std::option::Option<&str> {
self.hostname.as_deref()
}
/// <p>The instance's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux 2016.09</code>,
/// <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>A supported Windows operating system, such as <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about the supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <p>The default option is the current Amazon Linux version. If you set this parameter to
/// <code>Custom</code>, you must use the <a>CreateInstance</a> action's AmiId parameter to
/// specify the custom AMI that you want to use. Block device mappings are not supported if the value is <code>Custom</code>. For more information about supported operating
/// systems, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">Operating Systems</a>. For more information about how to use custom AMIs with AWS OpsWorks Stacks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
pub fn os(&self) -> std::option::Option<&str> {
self.os.as_deref()
}
/// <p>A custom AMI ID to be used to create the instance. The AMI should be based on one of the
/// supported operating systems.
/// For more information, see
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using Custom AMIs</a>.</p>
/// <note>
/// <p>If you specify a custom AMI, you must set <code>Os</code> to <code>Custom</code>.</p>
/// </note>
pub fn ami_id(&self) -> std::option::Option<&str> {
self.ami_id.as_deref()
}
/// <p>The instance's Amazon EC2 key-pair name.</p>
pub fn ssh_key_name(&self) -> std::option::Option<&str> {
self.ssh_key_name.as_deref()
}
/// <p>The instance Availability Zone. For more information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn availability_zone(&self) -> std::option::Option<&str> {
self.availability_zone.as_deref()
}
/// <p>The instance's virtualization type, <code>paravirtual</code> or <code>hvm</code>.</p>
pub fn virtualization_type(&self) -> std::option::Option<&str> {
self.virtualization_type.as_deref()
}
/// <p>The ID of the instance's subnet. If the stack is running in a VPC, you can use this parameter to override the stack's default subnet ID value and direct AWS OpsWorks Stacks to launch the instance in a different subnet.</p>
pub fn subnet_id(&self) -> std::option::Option<&str> {
self.subnet_id.as_deref()
}
/// <p>The instance architecture. The default option is <code>x86_64</code>. Instance types do not
/// necessarily support both architectures. For a list of the architectures that are supported by
/// the different instance types, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html">Instance Families and
/// Types</a>.</p>
pub fn architecture(&self) -> std::option::Option<&crate::model::Architecture> {
self.architecture.as_ref()
}
/// <p>The instance root device type. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn root_device_type(&self) -> std::option::Option<&crate::model::RootDeviceType> {
self.root_device_type.as_ref()
}
/// <p>An array of <code>BlockDeviceMapping</code> objects that specify the instance's block
/// devices. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html">Block
/// Device Mapping</a>. Note that block device mappings are not supported for custom AMIs.</p>
pub fn block_device_mappings(
&self,
) -> std::option::Option<&[crate::model::BlockDeviceMapping]> {
self.block_device_mappings.as_deref()
}
/// <p>Whether to install operating system and package updates when the instance boots. The default
/// value is <code>true</code>. To control when updates are installed, set this value to
/// <code>false</code>. You must then update your instances manually by using
/// <a>CreateDeployment</a> to run the <code>update_dependencies</code> stack command or
/// by manually running <code>yum</code> (Amazon Linux) or <code>apt-get</code> (Ubuntu) on the
/// instances. </p>
/// <note>
/// <p>We strongly recommend using the default value of <code>true</code> to ensure that your
/// instances have the latest security updates.</p>
/// </note>
pub fn install_updates_on_boot(&self) -> std::option::Option<bool> {
self.install_updates_on_boot
}
/// <p>Whether to create an Amazon EBS-optimized instance.</p>
pub fn ebs_optimized(&self) -> std::option::Option<bool> {
self.ebs_optimized
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>
/// <code>INHERIT</code> - Use the stack's default agent version setting.</p>
/// </li>
/// <li>
/// <p>
/// <i>version_number</i> - Use the specified agent version.
/// This value overrides the stack's default setting.
/// To update the agent version, edit the instance configuration and specify a
/// new version.
/// AWS OpsWorks Stacks then automatically installs that version on the instance.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>INHERIT</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
pub fn agent_version(&self) -> std::option::Option<&str> {
self.agent_version.as_deref()
}
/// <p>The instance's tenancy option. The default option is no tenancy, or if the instance is running in a VPC, inherit tenancy settings from the VPC. The following are valid values for this parameter: <code>dedicated</code>, <code>default</code>, or <code>host</code>. Because there are costs associated with changes in tenancy options, we recommend that you research tenancy options before choosing them for your instances. For more information about dedicated hosts, see <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Dedicated Hosts Overview</a> and <a href="http://aws.amazon.com/ec2/dedicated-hosts/">Amazon EC2 Dedicated Hosts</a>. For more information about dedicated instances, see <a href="https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html">Dedicated Instances</a> and <a href="http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/">Amazon EC2 Dedicated Instances</a>.</p>
pub fn tenancy(&self) -> std::option::Option<&str> {
self.tenancy.as_deref()
}
}
impl std::fmt::Debug for CreateInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateInstanceInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("layer_ids", &self.layer_ids);
formatter.field("instance_type", &self.instance_type);
formatter.field("auto_scaling_type", &self.auto_scaling_type);
formatter.field("hostname", &self.hostname);
formatter.field("os", &self.os);
formatter.field("ami_id", &self.ami_id);
formatter.field("ssh_key_name", &self.ssh_key_name);
formatter.field("availability_zone", &self.availability_zone);
formatter.field("virtualization_type", &self.virtualization_type);
formatter.field("subnet_id", &self.subnet_id);
formatter.field("architecture", &self.architecture);
formatter.field("root_device_type", &self.root_device_type);
formatter.field("block_device_mappings", &self.block_device_mappings);
formatter.field("install_updates_on_boot", &self.install_updates_on_boot);
formatter.field("ebs_optimized", &self.ebs_optimized);
formatter.field("agent_version", &self.agent_version);
formatter.field("tenancy", &self.tenancy);
formatter.finish()
}
}
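/// Illustrative sketch only (not generated from the service model): it assumes the crate's
/// generated builders for both this input and `crate::model::DeploymentCommand`, and that
/// `DeploymentCommandName::Deploy` is the variant for the `deploy` command. IDs are
/// placeholders; the block is marked `ignore` so it is not compiled as a doc test.
///
/// ```ignore
/// // An app deployment targeting a whole stack; `app_id` is required for the `deploy`
/// // command, and `custom_json` optionally overrides stack configuration attributes
/// // using the escaped-JSON string format documented on the field below.
/// let command = crate::model::DeploymentCommand::builder()
///     .name(crate::model::DeploymentCommandName::Deploy)   // assumed enum variant
///     .build();
/// let input = CreateDeploymentInput::builder()
///     .stack_id("stack-id-placeholder")
///     .app_id("app-id-placeholder")
///     .command(command)
///     .custom_json("{\"key1\": \"value1\", \"key2\": \"value2\"}")
///     .comment("Deploy latest revision")
///     .build()?;
/// ```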
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateDeploymentInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The app ID. This parameter is required for app deployments, but not for other deployment commands.</p>
pub app_id: std::option::Option<std::string::String>,
/// <p>The instance IDs for the deployment targets.</p>
pub instance_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>The layer IDs for the deployment targets.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>A <code>DeploymentCommand</code> object that specifies the deployment command and any
/// associated arguments.</p>
pub command: std::option::Option<crate::model::DeploymentCommand>,
/// <p>A user-defined comment.</p>
pub comment: std::option::Option<std::string::String>,
/// <p>A string that contains user-defined, custom JSON. You can use this parameter to override some corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a> and
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">Overriding Attributes With Custom JSON</a>.</p>
pub custom_json: std::option::Option<std::string::String>,
}
impl CreateDeploymentInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The app ID. This parameter is required for app deployments, but not for other deployment commands.</p>
pub fn app_id(&self) -> std::option::Option<&str> {
self.app_id.as_deref()
}
/// <p>The instance IDs for the deployment targets.</p>
pub fn instance_ids(&self) -> std::option::Option<&[std::string::String]> {
self.instance_ids.as_deref()
}
/// <p>The layer IDs for the deployment targets.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
/// <p>A <code>DeploymentCommand</code> object that specifies the deployment command and any
/// associated arguments.</p>
pub fn command(&self) -> std::option::Option<&crate::model::DeploymentCommand> {
self.command.as_ref()
}
/// <p>A user-defined comment.</p>
pub fn comment(&self) -> std::option::Option<&str> {
self.comment.as_deref()
}
/// <p>A string that contains user-defined, custom JSON. You can use this parameter to override some corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a> and
/// <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-json-override.html">Overriding Attributes With Custom JSON</a>.</p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
}
impl std::fmt::Debug for CreateDeploymentInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateDeploymentInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("app_id", &self.app_id);
formatter.field("instance_ids", &self.instance_ids);
formatter.field("layer_ids", &self.layer_ids);
formatter.field("command", &self.command);
formatter.field("comment", &self.comment);
formatter.field("custom_json", &self.custom_json);
formatter.finish()
}
}
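/// Illustrative sketch only (not generated from the service model): it assumes the crate's
/// generated builders for this input and `crate::model::EnvironmentVariable`, that
/// `AppType::Other` is the variant for the `other` app type, and that the `domains(...)`
/// setter appends one domain per call. IDs are placeholders; the block is marked `ignore`
/// so it is not compiled as a doc test.
///
/// ```ignore
/// let env_var = crate::model::EnvironmentVariable::builder()
///     .key("APP_ENV")
///     .value("production")
///     .secure(false)
///     .build();
/// let input = CreateAppInput::builder()
///     .stack_id("stack-id-placeholder")
///     .name("my-app")
///     .r#type(crate::model::AppType::Other)   // use `other` with custom Deploy recipes
///     .domains("www.example.com")
///     .domains("example.com")
///     .enable_ssl(false)
///     .environment(env_var)                   // total environment data must stay under 20 KB
///     .build()?;
/// ```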
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreateAppInput {
/// <p>The stack ID.</p>
pub stack_id: std::option::Option<std::string::String>,
/// <p>The app's short name.</p>
pub shortname: std::option::Option<std::string::String>,
/// <p>The app name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>A description of the app.</p>
pub description: std::option::Option<std::string::String>,
/// <p>The app's data source.</p>
pub data_sources: std::option::Option<std::vec::Vec<crate::model::DataSource>>,
/// <p>The app type. Each supported type is associated with a particular layer. For example, PHP
/// applications are associated with a PHP layer. AWS OpsWorks Stacks deploys an application to those instances
/// that are members of the corresponding layer. If your app isn't one of the standard types, or
/// you prefer to implement your own Deploy recipes, specify <code>other</code>.</p>
pub r#type: std::option::Option<crate::model::AppType>,
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub app_source: std::option::Option<crate::model::Source>,
/// <p>The app virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub domains: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>Whether to enable SSL for the app.</p>
pub enable_ssl: std::option::Option<bool>,
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub ssl_configuration: std::option::Option<crate::model::SslConfiguration>,
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
>,
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub environment: std::option::Option<std::vec::Vec<crate::model::EnvironmentVariable>>,
}
impl CreateAppInput {
/// <p>The stack ID.</p>
pub fn stack_id(&self) -> std::option::Option<&str> {
self.stack_id.as_deref()
}
/// <p>The app's short name.</p>
pub fn shortname(&self) -> std::option::Option<&str> {
self.shortname.as_deref()
}
/// <p>The app name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>A description of the app.</p>
pub fn description(&self) -> std::option::Option<&str> {
self.description.as_deref()
}
/// <p>The app's data source.</p>
pub fn data_sources(&self) -> std::option::Option<&[crate::model::DataSource]> {
self.data_sources.as_deref()
}
/// <p>The app type. Each supported type is associated with a particular layer. For example, PHP
/// applications are associated with a PHP layer. AWS OpsWorks Stacks deploys an application to those instances
/// that are members of the corresponding layer. If your app isn't one of the standard types, or
/// you prefer to implement your own Deploy recipes, specify <code>other</code>.</p>
pub fn r#type(&self) -> std::option::Option<&crate::model::AppType> {
self.r#type.as_ref()
}
/// <p>A <code>Source</code> object that specifies the app repository.</p>
pub fn app_source(&self) -> std::option::Option<&crate::model::Source> {
self.app_source.as_ref()
}
/// <p>The app virtual host settings, with multiple domains separated by commas. For example:
/// <code>'www.example.com, example.com'</code>
/// </p>
pub fn domains(&self) -> std::option::Option<&[std::string::String]> {
self.domains.as_deref()
}
/// <p>Whether to enable SSL for the app.</p>
pub fn enable_ssl(&self) -> std::option::Option<bool> {
self.enable_ssl
}
/// <p>An <code>SslConfiguration</code> object with the SSL configuration.</p>
pub fn ssl_configuration(&self) -> std::option::Option<&crate::model::SslConfiguration> {
self.ssl_configuration.as_ref()
}
/// <p>One or more user-defined key/value pairs to be added to the stack attributes.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::AppAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>An array of <code>EnvironmentVariable</code> objects that specify environment variables to be
/// associated with the app. After you deploy the app, these variables are defined on the
/// associated app server instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment"> Environment Variables</a>.</p>
/// <p>There is no specific limit on the number of environment variables. However, the size of the associated data structure - which includes the variables' names, values, and protected flag values - cannot exceed 20 KB. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20KB)."</p>
/// <note>
/// <p>If you have specified one or more environment variables, you cannot modify the stack's Chef version.</p>
/// </note>
pub fn environment(&self) -> std::option::Option<&[crate::model::EnvironmentVariable]> {
self.environment.as_deref()
}
}
impl std::fmt::Debug for CreateAppInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreateAppInput");
formatter.field("stack_id", &self.stack_id);
formatter.field("shortname", &self.shortname);
formatter.field("name", &self.name);
formatter.field("description", &self.description);
formatter.field("data_sources", &self.data_sources);
formatter.field("r#type", &self.r#type);
formatter.field("app_source", &self.app_source);
formatter.field("domains", &self.domains);
formatter.field("enable_ssl", &self.enable_ssl);
formatter.field("ssl_configuration", &self.ssl_configuration);
formatter.field("attributes", &self.attributes);
formatter.field("environment", &self.environment);
formatter.finish()
}
}
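/// Illustrative sketch only (not generated from the service model): it assumes the crate's
/// generated builder API for this input. The stack ID and IAM ARNs are placeholders
/// (123456789012 is a dummy account ID); the block is marked `ignore` so it is not compiled
/// as a doc test.
///
/// ```ignore
/// // `service_role_arn` has no default and must be set explicitly, even when cloning;
/// // `clone_permissions` copies the source stack's permissions to the new stack.
/// let input = CloneStackInput::builder()
///     .source_stack_id("source-stack-id-placeholder")
///     .name("my-cloned-stack")
///     .region("us-west-2")
///     .service_role_arn("arn:aws:iam::123456789012:role/aws-opsworks-service-role")
///     .default_instance_profile_arn("arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role")
///     .clone_permissions(true)
///     .build()?;
/// ```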
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CloneStackInput {
/// <p>The source stack ID.</p>
pub source_stack_id: std::option::Option<std::string::String>,
/// <p>The cloned stack name.</p>
pub name: std::option::Option<std::string::String>,
/// <p>The cloned stack AWS region, such as "ap-northeast-2". For more information about AWS regions, see
/// <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub region: std::option::Option<std::string::String>,
/// <p>The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All
/// instances are launched into this VPC, and you cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2 Classic, the default value is no VPC.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2 Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively. </p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2 Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub vpc_id: std::option::Option<std::string::String>,
/// <p>A list of stack attributes and values as key/value pairs to be added to the cloned stack.</p>
pub attributes: std::option::Option<
std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
>,
/// <p>The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. If you create a stack by using the AWS OpsWorks Stacks console, it creates the role for
/// you. You can obtain an existing stack's IAM ARN programmatically by calling
/// <a>DescribePermissions</a>. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
/// <note>
/// <p>You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly.</p>
/// </note>
pub service_role_arn: std::option::Option<std::string::String>,
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub default_instance_profile_arn: std::option::Option<std::string::String>,
/// <p>The stack's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux
/// 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the parent stack's operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the cloned stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub default_os: std::option::Option<std::string::String>,
/// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub hostname_theme: std::option::Option<std::string::String>,
/// <p>The cloned stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub default_availability_zone: std::option::Option<std::string::String>,
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub default_subnet_id: std::option::Option<std::string::String>,
/// <p>A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>
/// </p>
pub custom_json: std::option::Option<std::string::String>,
/// <p>The configuration manager. When you clone a stack we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub configuration_manager: std::option::Option<crate::model::StackConfigurationManager>,
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub chef_configuration: std::option::Option<crate::model::ChefConfiguration>,
/// <p>Whether to use custom cookbooks.</p>
pub use_custom_cookbooks: std::option::Option<bool>,
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it, but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate Amazon Elastic Compute Cloud (Amazon EC2) security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub use_opsworks_security_groups: std::option::Option<bool>,
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub custom_cookbooks_source: std::option::Option<crate::model::Source>,
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub default_ssh_key_name: std::option::Option<std::string::String>,
/// <p>Whether to clone the source stack's permissions.</p>
pub clone_permissions: std::option::Option<bool>,
/// <p>A list of source stack app IDs to be included in the cloned stack.</p>
pub clone_app_ids: std::option::Option<std::vec::Vec<std::string::String>>,
/// <p>The default root device type. This value is used by default for all instances in the cloned
/// stack, but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub default_root_device_type: std::option::Option<crate::model::RootDeviceType>,
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update
/// the agent version, you must edit the stack configuration and specify a new version.
/// AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub agent_version: std::option::Option<std::string::String>,
}
impl CloneStackInput {
/// <p>The source stack ID.</p>
pub fn source_stack_id(&self) -> std::option::Option<&str> {
self.source_stack_id.as_deref()
}
/// <p>The cloned stack name.</p>
pub fn name(&self) -> std::option::Option<&str> {
self.name.as_deref()
}
/// <p>The cloned stack AWS region, such as "ap-northeast-2". For more information about AWS regions, see
/// <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and Endpoints</a>.</p>
pub fn region(&self) -> std::option::Option<&str> {
self.region.as_deref()
}
/// <p>The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All
/// instances are launched into this VPC, and you cannot change the ID later.</p>
/// <ul>
/// <li>
/// <p>If your account supports EC2 Classic, the default value is no VPC.</p>
/// </li>
/// <li>
/// <p>If your account does not support EC2 Classic, the default value is the default VPC for the specified region.</p>
/// </li>
/// </ul>
/// <p>If the VPC ID corresponds to a default VPC and you have specified either the
/// <code>DefaultAvailabilityZone</code> or the <code>DefaultSubnetId</code> parameter only,
/// AWS OpsWorks Stacks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks Stacks sets
/// these parameters to the first valid Availability Zone for the specified region and the
/// corresponding default VPC subnet ID, respectively. </p>
/// <p>If you specify a nondefault VPC ID, note the following:</p>
/// <ul>
/// <li>
/// <p>It must belong to a VPC in your account that is in the specified region.</p>
/// </li>
/// <li>
/// <p>You must specify a value for <code>DefaultSubnetId</code>.</p>
/// </li>
/// </ul>
/// <p>For more information about how to use AWS OpsWorks Stacks with a VPC, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html">Running a Stack in a
/// VPC</a>. For more information about default VPC and EC2 Classic, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html">Supported
/// Platforms</a>. </p>
pub fn vpc_id(&self) -> std::option::Option<&str> {
self.vpc_id.as_deref()
}
/// <p>A list of stack attributes and values as key/value pairs to be added to the cloned stack.</p>
pub fn attributes(
&self,
) -> std::option::Option<
&std::collections::HashMap<crate::model::StackAttributesKeys, std::string::String>,
> {
self.attributes.as_ref()
}
/// <p>The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks Stacks to work with AWS
/// resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an
/// existing IAM role. If you create a stack by using the AWS OpsWorks Stacks console, it creates the role for
/// you. You can obtain an existing stack's IAM ARN programmatically by calling
/// <a>DescribePermissions</a>. For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
/// <note>
/// <p>You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly.</p>
/// </note>
pub fn service_role_arn(&self) -> std::option::Option<&str> {
self.service_role_arn.as_deref()
}
/// <p>The Amazon Resource Name (ARN) of an IAM profile that is the default profile for all of the stack's EC2 instances.
/// For more information about IAM ARNs, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html">Using
/// Identifiers</a>.</p>
pub fn default_instance_profile_arn(&self) -> std::option::Option<&str> {
self.default_instance_profile_arn.as_deref()
}
/// <p>The stack's operating system, which must be set to one of the following.</p>
/// <ul>
/// <li>
/// <p>A supported Linux operating system: An Amazon Linux version, such as <code>Amazon Linux 2018.03</code>, <code>Amazon Linux 2017.09</code>, <code>Amazon Linux 2017.03</code>, <code>Amazon Linux
/// 2016.09</code>, <code>Amazon Linux 2016.03</code>, <code>Amazon Linux 2015.09</code>, or <code>Amazon Linux 2015.03</code>.</p>
/// </li>
/// <li>
/// <p>A supported Ubuntu operating system, such as <code>Ubuntu 16.04 LTS</code>, <code>Ubuntu 14.04 LTS</code>, or <code>Ubuntu 12.04 LTS</code>.</p>
/// </li>
/// <li>
/// <p>
/// <code>CentOS Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Red Hat Enterprise Linux 7</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Microsoft Windows Server 2012 R2 Base</code>, <code>Microsoft Windows Server 2012 R2 with SQL Server Express</code>,
/// <code>Microsoft Windows Server 2012 R2 with SQL Server Standard</code>, or <code>Microsoft Windows Server 2012 R2 with SQL Server Web</code>.</p>
/// </li>
/// <li>
/// <p>A custom AMI: <code>Custom</code>. You specify the custom AMI you want to use when
/// you create instances. For more information about how to use custom AMIs with OpsWorks, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html">Using
/// Custom AMIs</a>.</p>
/// </li>
/// </ul>
/// <p>The default option is the parent stack's operating system.
/// For more information about supported operating systems,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html">AWS OpsWorks Stacks Operating Systems</a>.</p>
/// <note>
/// <p>You can specify a different Linux operating system for the cloned stack, but you cannot change from Linux to Windows or Windows to Linux.</p>
/// </note>
pub fn default_os(&self) -> std::option::Option<&str> {
self.default_os.as_deref()
}
    /// <p>The stack's host name theme, with spaces replaced by underscores. The theme is used to
/// generate host names for the stack's instances. By default, <code>HostnameTheme</code> is set
/// to <code>Layer_Dependent</code>, which creates host names by appending integers to the layer's
/// short name. The other themes are:</p>
/// <ul>
/// <li>
/// <p>
/// <code>Baked_Goods</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Clouds</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Europe_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Fruits</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Greek_Deities_and_Titans</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Legendary_creatures_from_Japan</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Planets_and_Moons</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Roman_Deities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Scottish_Islands</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>US_Cities</code>
/// </p>
/// </li>
/// <li>
/// <p>
/// <code>Wild_Cats</code>
/// </p>
/// </li>
/// </ul>
/// <p>To obtain a generated host name, call <code>GetHostNameSuggestion</code>, which returns a
/// host name based on the current theme.</p>
pub fn hostname_theme(&self) -> std::option::Option<&str> {
self.hostname_theme.as_deref()
}
/// <p>The cloned stack's default Availability Zone, which must be in the specified region. For more
/// information, see <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html">Regions and
/// Endpoints</a>. If you also specify a value for <code>DefaultSubnetId</code>, the subnet must
/// be in the same zone. For more information, see the <code>VpcId</code> parameter description.
/// </p>
pub fn default_availability_zone(&self) -> std::option::Option<&str> {
self.default_availability_zone.as_deref()
}
/// <p>The stack's default VPC subnet ID. This parameter is required if you specify a value for the
/// <code>VpcId</code> parameter. All instances are launched into this subnet unless you specify
/// otherwise when you create the instance. If you also specify a value for
/// <code>DefaultAvailabilityZone</code>, the subnet must be in that zone. For information on
/// default values and when this parameter is required, see the <code>VpcId</code> parameter
/// description. </p>
pub fn default_subnet_id(&self) -> std::option::Option<&str> {
self.default_subnet_id.as_deref()
}
/// <p>A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format:</p>
/// <p>
/// <code>"{\"key1\": \"value1\", \"key2\": \"value2\",...}"</code>
/// </p>
/// <p>For more information about custom JSON, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html">Use Custom JSON to
/// Modify the Stack Configuration Attributes</a>
/// </p>
pub fn custom_json(&self) -> std::option::Option<&str> {
self.custom_json.as_deref()
}
/// <p>The configuration manager. When you clone a stack we recommend that you use the configuration manager to specify the Chef version: 12, 11.10, or 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for Linux stacks is currently 12.</p>
pub fn configuration_manager(
&self,
) -> std::option::Option<&crate::model::StackConfigurationManager> {
self.configuration_manager.as_ref()
}
/// <p>A <code>ChefConfiguration</code> object that specifies whether to enable Berkshelf and the
/// Berkshelf version on Chef 11.10 stacks. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New Stack</a>.</p>
pub fn chef_configuration(&self) -> std::option::Option<&crate::model::ChefConfiguration> {
self.chef_configuration.as_ref()
}
/// <p>Whether to use custom cookbooks.</p>
pub fn use_custom_cookbooks(&self) -> std::option::Option<bool> {
self.use_custom_cookbooks
}
/// <p>Whether to associate the AWS OpsWorks Stacks built-in security groups with the stack's layers.</p>
/// <p>AWS OpsWorks Stacks provides a standard set of built-in security groups, one for each layer, which are
/// associated with layers by default. With <code>UseOpsworksSecurityGroups</code> you can instead
/// provide your own custom security groups. <code>UseOpsworksSecurityGroups</code> has the
/// following settings: </p>
/// <ul>
/// <li>
/// <p>True - AWS OpsWorks Stacks automatically associates the appropriate built-in security group with each layer (default setting). You can associate additional security groups with a layer after you create it but you cannot delete the built-in security group.</p>
/// </li>
/// <li>
/// <p>False - AWS OpsWorks Stacks does not associate built-in security groups with layers. You must create appropriate Amazon Elastic Compute Cloud (Amazon EC2) security groups and associate a security group with each layer that you create. However, you can still manually associate a built-in security group with a layer on creation; custom security groups are required only for those layers that need custom settings.</p>
/// </li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-creating.html">Create a New
/// Stack</a>.</p>
pub fn use_opsworks_security_groups(&self) -> std::option::Option<bool> {
self.use_opsworks_security_groups
}
/// <p>Contains the information required to retrieve an app or cookbook from a repository. For more information,
/// see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html">Adding Apps</a> or <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html">Cookbooks and Recipes</a>.</p>
pub fn custom_cookbooks_source(&self) -> std::option::Option<&crate::model::Source> {
self.custom_cookbooks_source.as_ref()
}
/// <p>A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS
/// OpsWorks installs the public key on the instance and you can use the private key with an SSH
/// client to log in to the instance. For more information, see <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html"> Using SSH to
/// Communicate with an Instance</a> and <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/security-ssh-access.html"> Managing SSH
/// Access</a>. You can override this setting by specifying a different key pair, or no key
/// pair, when you <a href="https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-add.html">
/// create an instance</a>. </p>
pub fn default_ssh_key_name(&self) -> std::option::Option<&str> {
self.default_ssh_key_name.as_deref()
}
/// <p>Whether to clone the source stack's permissions.</p>
pub fn clone_permissions(&self) -> std::option::Option<bool> {
self.clone_permissions
}
/// <p>A list of source stack app IDs to be included in the cloned stack.</p>
pub fn clone_app_ids(&self) -> std::option::Option<&[std::string::String]> {
self.clone_app_ids.as_deref()
}
/// <p>The default root device type. This value is used by default for all instances in the cloned
/// stack, but you can override it when you create an instance. For more information, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device">Storage for the Root Device</a>.</p>
pub fn default_root_device_type(&self) -> std::option::Option<&crate::model::RootDeviceType> {
self.default_root_device_type.as_ref()
}
/// <p>The default AWS OpsWorks Stacks agent version. You have the following options:</p>
/// <ul>
/// <li>
/// <p>Auto-update - Set this parameter to <code>LATEST</code>. AWS OpsWorks Stacks
/// automatically installs new agent versions on the stack's instances as soon as
/// they are available.</p>
/// </li>
/// <li>
/// <p>Fixed version - Set this parameter to your preferred agent version. To update
/// the agent version, you must edit the stack configuration and specify a new version.
/// AWS OpsWorks Stacks then automatically installs that version on the stack's instances.</p>
/// </li>
/// </ul>
/// <p>The default setting is <code>LATEST</code>. To specify an agent version,
/// you must use the complete version number, not the abbreviated number shown on the console.
/// For a list of available agent version numbers, call <a>DescribeAgentVersions</a>. AgentVersion cannot be set to Chef 12.2.</p>
/// <note>
/// <p>You can also specify an agent version when you create or update an instance, which overrides the stack's default setting.</p>
/// </note>
pub fn agent_version(&self) -> std::option::Option<&str> {
self.agent_version.as_deref()
}
}
impl std::fmt::Debug for CloneStackInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CloneStackInput");
formatter.field("source_stack_id", &self.source_stack_id);
formatter.field("name", &self.name);
formatter.field("region", &self.region);
formatter.field("vpc_id", &self.vpc_id);
formatter.field("attributes", &self.attributes);
formatter.field("service_role_arn", &self.service_role_arn);
formatter.field(
"default_instance_profile_arn",
&self.default_instance_profile_arn,
);
formatter.field("default_os", &self.default_os);
formatter.field("hostname_theme", &self.hostname_theme);
formatter.field("default_availability_zone", &self.default_availability_zone);
formatter.field("default_subnet_id", &self.default_subnet_id);
formatter.field("custom_json", &self.custom_json);
formatter.field("configuration_manager", &self.configuration_manager);
formatter.field("chef_configuration", &self.chef_configuration);
formatter.field("use_custom_cookbooks", &self.use_custom_cookbooks);
formatter.field(
"use_opsworks_security_groups",
&self.use_opsworks_security_groups,
);
formatter.field("custom_cookbooks_source", &self.custom_cookbooks_source);
formatter.field("default_ssh_key_name", &self.default_ssh_key_name);
formatter.field("clone_permissions", &self.clone_permissions);
formatter.field("clone_app_ids", &self.clone_app_ids);
formatter.field("default_root_device_type", &self.default_root_device_type);
formatter.field("agent_version", &self.agent_version);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct AttachElasticLoadBalancerInput {
/// <p>The Elastic Load Balancing instance's name.</p>
pub elastic_load_balancer_name: std::option::Option<std::string::String>,
/// <p>The ID of the layer to which the Elastic Load Balancing instance is to be attached.</p>
pub layer_id: std::option::Option<std::string::String>,
}
impl AttachElasticLoadBalancerInput {
/// <p>The Elastic Load Balancing instance's name.</p>
pub fn elastic_load_balancer_name(&self) -> std::option::Option<&str> {
self.elastic_load_balancer_name.as_deref()
}
/// <p>The ID of the layer to which the Elastic Load Balancing instance is to be attached.</p>
pub fn layer_id(&self) -> std::option::Option<&str> {
self.layer_id.as_deref()
}
}
impl std::fmt::Debug for AttachElasticLoadBalancerInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("AttachElasticLoadBalancerInput");
formatter.field(
"elastic_load_balancer_name",
&self.elastic_load_balancer_name,
);
formatter.field("layer_id", &self.layer_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct AssociateElasticIpInput {
/// <p>The Elastic IP address.</p>
pub elastic_ip: std::option::Option<std::string::String>,
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl AssociateElasticIpInput {
/// <p>The Elastic IP address.</p>
pub fn elastic_ip(&self) -> std::option::Option<&str> {
self.elastic_ip.as_deref()
}
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for AssociateElasticIpInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("AssociateElasticIpInput");
formatter.field("elastic_ip", &self.elastic_ip);
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct AssignVolumeInput {
/// <p>The volume ID.</p>
pub volume_id: std::option::Option<std::string::String>,
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
}
impl AssignVolumeInput {
/// <p>The volume ID.</p>
pub fn volume_id(&self) -> std::option::Option<&str> {
self.volume_id.as_deref()
}
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
}
impl std::fmt::Debug for AssignVolumeInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("AssignVolumeInput");
formatter.field("volume_id", &self.volume_id);
formatter.field("instance_id", &self.instance_id);
formatter.finish()
}
}
#[allow(missing_docs)] // documentation missing in model
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct AssignInstanceInput {
/// <p>The instance ID.</p>
pub instance_id: std::option::Option<std::string::String>,
/// <p>The layer ID, which must correspond to a custom layer. You cannot assign a registered instance to a built-in layer.</p>
pub layer_ids: std::option::Option<std::vec::Vec<std::string::String>>,
}
impl AssignInstanceInput {
/// <p>The instance ID.</p>
pub fn instance_id(&self) -> std::option::Option<&str> {
self.instance_id.as_deref()
}
/// <p>The layer ID, which must correspond to a custom layer. You cannot assign a registered instance to a built-in layer.</p>
pub fn layer_ids(&self) -> std::option::Option<&[std::string::String]> {
self.layer_ids.as_deref()
}
}
impl std::fmt::Debug for AssignInstanceInput {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("AssignInstanceInput");
formatter.field("instance_id", &self.instance_id);
formatter.field("layer_ids", &self.layer_ids);
formatter.finish()
}
}
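// Illustrative sketch (not part of the generated SDK source): reading an
// AssignInstanceInput through the accessor methods defined above. The helper name
// `describe_assignment` is hypothetical.
fn describe_assignment(input: &AssignInstanceInput) -> std::string::String {
    let instance = input.instance_id().unwrap_or("<unset>");
    let layer_count = input.layer_ids().map(|ids| ids.len()).unwrap_or(0);
    format!("assign instance {} to {} layer(s)", instance, layer_count)
}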
|
Age differences in hopelessness and toe pain in persons with insulin-dependent and non-insulin-dependent diabetes mellitus. BACKGROUND Several studies have established an association between diabetic neuropathy and depressive symptoms. There is a link between depression and peripheral neuropathy in diabetic patients, suggesting an increased likelihood that diabetic patients will experience depressive symptoms related to lower-extremity peripheral neuropathy and arthritis during middle age and later life. The goal of this investigation was to determine whether there are age differences between insulin-dependent and non-insulin-dependent diabetic patients regarding their feelings of hopelessness and toe pain. METHODS A large population-based sample of 32,006 adults from the 1998 National Health Interview Survey was analyzed with multivariate statistical procedures. We performed group-comparison and correlation procedures to test the null hypothesis that there are no age or sex differences between insulin-dependent and non-insulin-dependent diabetic patients in their reporting of feelings of hopelessness and toe pain symptoms in the previous 12 months. RESULTS There were significant differences between age and sex groups of insulin-dependent and non-insulin-dependent diabetic patients in reporting feelings of hopelessness and toe pain symptoms, rejecting the null hypothesis. Correlational analysis conducted between the variables of hopelessness and toe pain yielded significant correlations in insulin-dependent (r = 0.28; P = .0009; α = .05) and non-insulin-dependent (r = 0.19; P = .001; α = .05) women older than 61 years, indicating that diabetic women in that age group are more likely to experience hopelessness and toe pain regardless of insulin status. CONCLUSIONS Clinicians should incorporate depression and toe pain symptoms into their assessment and treatment, especially in diabetic women older than 61 years. |
Horvath said “For Use” items are for educational purposes and aren’t in the museum’s permanent collections because they lack “provenance,” a history of ownership. Since he started at the museum 15 years ago, he has been accepting donations or buying pieces for talks and hands-on events.
“I’ve always wanted to do something for the kids, but you have to have the stuff to do it,” Horvath said.
Saturday’s event, from 10 a.m. to 3 p.m., will include volunteers to help preserve order and assist children with dressing.
“You want a sailor’s cap to go with a sailor’s uniform, not an Army one,” Horvath said.
In addition to a military-looking scene with camouflage netting and ammo crates, another backdrop will feature a Marine tactical mountain bicycle from the 1990s. For a third, parents can choose one of the service branch flags.
Elsewhere in the museum will be educational stations. At one, an Air Force veteran will answer questions about the World War II and Cold War planes represented by various models.
At another, children can heft a rare German machine gun from World War I. Brought to Tyrone under unknown circumstances, it’s in the “For Use” collection because it lacks a tie to Pennsylvania history.
Reduced admission for the day is $2 for children and $5.50 for adults. Parents must be present for children to try on uniforms.
Horvath said he hopes the day’s discoveries lead to more interest in learning about and preserving military history.
“There’s also a selfish aspect to this, in that we might create the future ranks of historians, curators and educators,” he said.
The Pennsylvania Military Museum is in Boalsburg, off U.S. Route 322 Business. For more information, visit www.pamilmuseum.org or call 466-6263. |
Schistosoma japonicum: some parameters affecting the development of protective immunity induced by a cryopreserved, irradiated schistosomula vaccine in guinea-pigs SUMMARY Experiments were conducted in guinea-pigs to elucidate the parameters affecting the development of protective immunity against Schistosoma japonicum induced by a cryopreserved, irradiated schistosomula vaccine such as the number of immunizations, route of injection and the use of adjuvants. Results obtained indicated that the cryopreserved, irradiated schistosomula vaccine was effective by either intradermal or intramuscular injection. One intradermal injection with BCG adjuvant resulted in an average worm reduction of 50.24%, only a little lower than that of a non-cryopreserved, irradiated vaccine, 53.55%, with no statistically significant difference between the two. By intramuscular injection the worm reduction was lower (max. 40%) whether given with or without adjuvants or in 1 or 2 injections. |
// GetISA returns the isa identified by "id".
// Returns nil, nil if not found
func (c *isaRepoV3) GetISA(ctx context.Context, id dssmodels.ID) (*ridmodels.IdentificationServiceArea, error) {
var query = fmt.Sprintf(`
SELECT %s FROM
identification_service_areas
WHERE
id = $1`, isaFieldsV3)
return c.processOne(ctx, query, id)
} |
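// Illustrative note (not part of the original source): per the comment above, callers
// should treat a (nil, nil) result from GetISA as "not found" rather than as an error, e.g.:
//
//	isa, err := repo.GetISA(ctx, id) // `repo`, `ctx` and `id` are hypothetical values here
//	if err != nil {
//		// handle the query error
//	}
//	if isa == nil {
//		// the ISA does not exist
//	}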
Effects of potassium carbonate as an alternative road de-icer to sodium chloride on soil chemical properties Summary The effect of potassium carbonate on soil chemical characteristics was compared with that of the most common de-icer, sodium chloride, in a 4-yr outdoor pot experiment with poplar and lime trees. Soil pH was raised more by K2CO3 than by NaCl. Potassium carbonate increased the electrical conductivity mainly in the upper soil layers. When K2CO3 was applied at an average annual dose of 154 g m⁻², only the water-soluble fractions of calcium and magnesium were affected. At an average annual dose of 617 g m⁻², total potassium increased by 33% and calcium was displaced from the exchange sites. Calcium saturation was reduced from 85% of the cation exchange capacity in the untreated control to 69% in the higher dose K2CO3 treatment and to 75% in the NaCl treatment. The results show that the negative impact of K2CO3 on soil chemical and osmotic properties is as high as that of NaCl. For plants, however, potassium carbonate in contrast to chloride is not toxic and, applied in moderate doses, may even remedy potassium deficiencies in roadside trees. |
Blockchain And Legal Protection of Copyright Technical Protection Measures for Publications in the Internet Era In today's difficult digital copyright era, the communication revolution driven by Internet technology has caused problems such as the weakening of the value foundation of copyrighted publications, the proliferation of copyright infringement, and the weakening of copyright awareness. Blockchain is a decentralized, trustless, and reliable data-maintenance technology, and is therefore considered to be the "life-saving straw" for publication copyright. This article uses blockchain technology to re-examine and improve publication copyright protection and its legal application in the new era of the Internet, and to build a system that balances and integrates publication copyright protection with blockchain technology. |
On Self-Presentation: The Relational versus Transactional Dimensions of Job Seeking in a Technologically Mediated Labor Market This paper presents a novel investigation of applicant job-seeking actions in a contemporary technology-mediated labor market. Job seeking is conceptualized as a form of self-presentation, normatively circumscribed by three components: the applicant, the employer, and the task. How job applicants present themselves to employers should therefore vary along relational (applicant to employer) and transactional (applicant to task) dimensions. Successful self-presentation along these two dimensions is hypothesized to differ as a function of the job seeker's demonstrable ability. When an applicant lacks cues of their ability to perform the job in question, relational presentation is likely disadvantageous; transactional presentation should instead be more successful because it ameliorates ability concerns. Conversely, relational presentation should be preferred under conditions of demonstrable ability because, conditional on being able to perform the job, nicer applicants should be preferred. Support is provided through unsupervised machine learning and sentiment analyses of over 9.6 million written job proposals by over 300,000 applicants on an online platform for gig-economy freelancers. Regression analyses support the contentions. Contributions to hiring, technology-mediated markets, and linguistic text analysis are discussed. |
/*
* Copyright 2012 Netflix, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.netflix.exhibitor.core;
import com.google.common.base.Preconditions;
import org.apache.curator.framework.api.ACLProvider;
import com.netflix.exhibitor.core.config.JQueryStyle;
import com.netflix.exhibitor.core.servo.ServoRegistration;
import com.sun.jersey.api.client.Client;
public class ExhibitorArguments
{
final int connectionTimeOutMs;
final int logWindowSizeLines;
final int configCheckMs;
final String extraHeadingText;
final String thisJVMHostname;
final boolean allowNodeMutations;
final JQueryStyle jQueryStyle;
final int restPort;
final String restPath;
final String restScheme;
final Runnable shutdownProc;
final LogDirection logDirection;
final ACLProvider aclProvider;
final ServoRegistration servoRegistration;
final String preferencesPath;
final RemoteConnectionConfiguration remoteConnectionConfiguration;
public enum LogDirection
{
NATURAL,
INVERTED
}
public static class Builder
{
private ExhibitorArguments arguments = new ExhibitorArguments();
/**
         * @param connectionTimeOutMs the connection timeout to use when making internal connections to ZK, etc.
* @return this
*/
public Builder connectionTimeOutMs(int connectionTimeOutMs)
{
arguments = new ExhibitorArguments(connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param logWindowSizeLines max lines for the log
* @return this
*/
public Builder logWindowSizeLines(int logWindowSizeLines)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param configCheckMs how often to check for shared config changes
* @return this
*/
public Builder configCheckMs(int configCheckMs)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param extraHeadingText any extra text to display in the web UI
* @return this
*/
public Builder extraHeadingText(String extraHeadingText)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param thisJVMHostname the hostname of this instance/JVM
* @return this
*/
public Builder thisJVMHostname(String thisJVMHostname)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param allowNodeMutations if true, the web UI will enable the modification button in the Explorer
* @return this
*/
public Builder allowNodeMutations(boolean allowNodeMutations)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param jQueryStyle the style to use for the web UI
* @return this
*/
public Builder jQueryStyle(JQueryStyle jQueryStyle)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param restPort port that Exhibitor REST calls listen on
* @return this
*/
public Builder restPort(int restPort)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param restPath additional path portion of REST calls
* @return this
*/
public Builder restPath(String restPath)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param restScheme http or https
* @return this
*/
public Builder restScheme(String restScheme)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param shutdownProc functor used to shutdown the Exhibitor service
* @return this
*/
public Builder shutdownProc(Runnable shutdownProc)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* @param logDirection change the display direction for Exhibitor logs
* @return this
*/
public Builder logDirection(LogDirection logDirection)
{
logDirection = Preconditions.checkNotNull(logDirection, "logDirection cannot be null");
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* If your ZooKeeper cluster has ACL enabled you need to set the ACL in Exhibitor so that it can successfully connect to the cluster
*
* @param aclProvider the acl provider to use
* @return this
*/
public Builder aclProvider(ACLProvider aclProvider)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, aclProvider, arguments.servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* To add Netflix Servo support pass the servo registration information
*
* @param servoRegistration servo details
* @return this
*/
public Builder servoRegistration(ServoRegistration servoRegistration)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, servoRegistration, arguments.preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* Certain values (such as Control Panel values) are stored in a preferences file. By default, <code>Preferences.userRoot()</code> is used. Use this
* method to specify a different file path.
*
* @param preferencesPath path for the preferences file
* @return this
*/
public Builder preferencesPath(String preferencesPath)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, preferencesPath, arguments.remoteConnectionConfiguration);
return this;
}
/**
* Exhibitor remotely connects to each of the instances in the ensemble. The RemoteConnectionConfiguration specifies
* configuration values for the remote client (which uses the Jersey {@link Client})
*
* @param remoteConnectionConfiguration remote connection configuration
* @return this
*/
public Builder remoteConnectionConfiguration(RemoteConnectionConfiguration remoteConnectionConfiguration)
{
arguments = new ExhibitorArguments(arguments.connectionTimeOutMs, arguments.logWindowSizeLines, arguments.configCheckMs, arguments.extraHeadingText, arguments.thisJVMHostname, arguments.allowNodeMutations, arguments.jQueryStyle, arguments.restPort, arguments.restPath, arguments.restScheme, arguments.shutdownProc, arguments.logDirection, arguments.aclProvider, arguments.servoRegistration, arguments.preferencesPath, remoteConnectionConfiguration);
return this;
}
public ExhibitorArguments build()
{
Preconditions.checkArgument(arguments.thisJVMHostname != null, "thisJVMHostname cannot be null");
Preconditions.checkArgument(arguments.connectionTimeOutMs > 0, "connectionTimeOutMs must be a positive number");
Preconditions.checkArgument(arguments.logWindowSizeLines > 0, "logWindowSizeLines must be a positive number");
Preconditions.checkArgument(arguments.configCheckMs > 0, "configCheckMs must be a positive number");
Preconditions.checkArgument(arguments.restPort > 0, "restPort must be a positive number");
Preconditions.checkArgument(arguments.restPath != null, "restPath cannot be null");
Preconditions.checkArgument(arguments.remoteConnectionConfiguration != null, "remoteConnectionConfiguration cannot be null");
return arguments;
}
private Builder()
{
}
}
public static Builder builder()
{
return new Builder();
}
private ExhibitorArguments()
{
this(30000, 1000, 5000, null, null, false, JQueryStyle.RED, 0, "/", "http", null, LogDirection.INVERTED, null, null, null, new RemoteConnectionConfiguration());
}
public ExhibitorArguments(int connectionTimeOutMs, int logWindowSizeLines, int configCheckMs, String extraHeadingText, String thisJVMHostname, boolean allowNodeMutations, JQueryStyle jQueryStyle, int restPort, String restPath, String restScheme, Runnable shutdownProc, LogDirection logDirection, ACLProvider aclProvider, ServoRegistration servoRegistration, String preferencesPath, RemoteConnectionConfiguration remoteConnectionConfiguration)
{
this.connectionTimeOutMs = connectionTimeOutMs;
this.logWindowSizeLines = logWindowSizeLines;
this.configCheckMs = configCheckMs;
this.extraHeadingText = extraHeadingText;
this.thisJVMHostname = thisJVMHostname;
this.allowNodeMutations = allowNodeMutations;
this.jQueryStyle = jQueryStyle;
this.restPort = restPort;
this.restPath = restPath;
this.restScheme = restScheme;
this.shutdownProc = shutdownProc;
this.logDirection = logDirection;
this.aclProvider = aclProvider;
this.servoRegistration = servoRegistration;
this.preferencesPath = preferencesPath;
this.remoteConnectionConfiguration = remoteConnectionConfiguration;
}
}
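// Illustrative usage sketch (not part of the original source): it exercises only the
// builder methods declared above, and the hostname, port and timeout values are
// hypothetical placeholders.
class ExhibitorArgumentsExample
{
    static ExhibitorArguments example()
    {
        return ExhibitorArguments.builder()
            .thisJVMHostname("localhost")   // required: build() rejects a null hostname
            .connectionTimeOutMs(10000)
            .logWindowSizeLines(2000)
            .configCheckMs(30000)
            .restPort(8080)                 // required: build() rejects a non-positive port
            .restScheme("http")
            .build();
    }
}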
|
Grafting reaction of organotin complexes on silica catalyzed by tungstic heteropolyacids. The grafting reaction of tetramethyltin on silica is catalyzed by H4SiW12O40 preliminarily impregnated on the support. While the reaction proceeds at temperatures higher than 150 degrees C on silica alone, the presence of the polyacid allows the grafting at room temperature. A study as a function of the polyacid coverage has shown that there is a direct correlation between the reaction rate and the number of highly acidic sites on the support, proving that there is a reaction of the tetraalkyltin with them (limiting step) followed by a migration of the grafted fragment on the silica surface. Not only monografted species (as observed on silica) but also multigrafted tin species are formed because of further reactions of the grafted fragments. |
// Source repository: zhangliangnbu/algorithm-demo2, file: src/com/liang/algo/common/SortColors.java
package com.liang.algo.common;
public class SortColors {
public void sortColors(int[] nums) {
if (nums == null || nums.length == 0) {
return;
}
        // Single pass: redEnd, whiteEnd and blueEnd track the last index of the 0-, 1- and
        // 2-regions built so far; each value extends its region and re-writes displaced values.
        int redEnd = -1, whiteEnd = -1, blueEnd = -1;
for (int val : nums) {
if (val == 0) {
redEnd ++;
nums[redEnd] = 0;
whiteEnd ++;
if (redEnd + 1 <= whiteEnd) {
nums[whiteEnd] = 1;
}
blueEnd ++;
if (whiteEnd + 1 <= blueEnd) {
nums[blueEnd] = 2;
}
} else if (val == 1) {
whiteEnd ++;
nums[whiteEnd] = 1;
blueEnd ++;
if (whiteEnd + 1 <= blueEnd) {
nums[blueEnd] = 2;
}
} else {
blueEnd ++;
}
}
}
}
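// Example (illustrative, not part of the original source):
//   new SortColors().sortColors(new int[]{2, 0, 2, 1, 1, 0}); // array becomes {0, 0, 1, 1, 2, 2}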
|
from typing import Union

import numpy as np


def cast_conf_or_unc(
    cls, as_confidence: Union[None, bool], superv_scores: np.ndarray
) -> np.ndarray:
    # Flip the sign when the requested representation (confidence vs. uncertainty)
    # differs from what cls.is_confidence() reports for the current scores.
    if as_confidence is not None and cls.is_confidence() != as_confidence:
        return superv_scores * -1
    return superv_scores |
/* eslint-disable @typescript-eslint/no-explicit-any */
/* eslint-disable @typescript-eslint/explicit-module-boundary-types */
/* eslint-disable @typescript-eslint/no-unsafe-argument */
export const getEnumKeyFromValue = (
key: any,
enumeration: any
): typeof enumeration =>
Object.keys(enumeration).find((enumKey) => enumeration[enumKey] === key);
export const isValueInEnum = (value: any, enumeration: any) =>
Object.values(enumeration).includes(value);
export const mapValueBetweenEnums = (
value: any,
sourceEnum: any,
destinationEnum: any
): any => {
if (isValueInEnum(value, destinationEnum)) {
return value;
}
return destinationEnum[getEnumKeyFromValue(value, sourceEnum)] || null;
};
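// Illustrative usage sketch (not part of the original source); `Color` and `Paint` are
// hypothetical enums used only to demonstrate mapValueBetweenEnums.
enum Color {
  Red = "RED",
  Blue = "BLUE",
}
enum Paint {
  Crimson = "CRIMSON",
  Blue = "BLUE",
}
// "BLUE" is also a value of Paint, so it is returned unchanged:
mapValueBetweenEnums(Color.Blue, Color, Paint); // => "BLUE"
// "RED" is not a Paint value and the key "Red" does not exist in Paint, so null is returned:
mapValueBetweenEnums(Color.Red, Color, Paint); // => null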
|
# pyre-strict
"""
Local task runner
"""
import logging
import math
import os
import json
import copy
import traceback
import glob
import time
import random
import multiprocessing
import threading
import shutil
import shlex
import re
from abc import ABC, abstractmethod
from typing import Tuple, List, Dict, Optional, Callable, Iterable, Set, Any
import psutil
import docker
from .. import Error, Type, Env, Value, StdLib, Tree, _util
from .._util import (
write_values_json,
provision_run_dir,
LOGGING_FORMAT,
PygtailLogger,
TerminationSignalFlag,
parse_byte_size,
chmod_R_plus,
path_really_within,
LoggingFileHandler,
AtomicCounter,
)
from .._util import StructuredLogMessage as _
from .download import able as downloadable, run as download
from .error import *
class TaskContainer(ABC):
"""
Base class for task containers, subclassed by runtime-specific
implementations (e.g. Docker).
"""
run_id: str
host_dir: str
"""
:type: str
The host path to the scratch directory that will be mounted inside the
container.
"""
container_dir: str
"""
:type: str
The scratch directory's mounted path inside the container. The task
command's working directory will be ``{container_dir}/work/``.
"""
input_file_map: Dict[str, str]
"""
:type: Dict[str,str]
A mapping of host input file paths to in-container mounted paths,
maintained by ``add_files``.
"""
input_file_map_rev: Dict[str, str]
_running: bool
def __init__(self, run_id: str, host_dir: str) -> None:
self.run_id = run_id
self.host_dir = host_dir
self.container_dir = "/mnt/miniwdl_task_container"
self.input_file_map = {}
self.input_file_map_rev = {}
self._running = False
os.makedirs(os.path.join(self.host_dir, "work"))
def add_files(self, host_files: Iterable[str]) -> None:
"""
Use before running the container to add a list of host files to mount
inside the container as inputs. The host-to-container path mapping is
maintained in ``input_file_map``.
Although ``add_files`` can be used multiple times, files should be
added together where possible, as this allows heuristics for dealing
with any name collisions among them.
"""
assert not self._running
# partition the files by host directory
host_files_by_dir = {}
for host_file in host_files:
if host_file not in self.input_file_map:
if not os.path.isfile(host_file):
raise Error.InputError("input file not found: " + host_file)
host_files_by_dir.setdefault(os.path.dirname(host_file), set()).add(host_file)
# for each such partition of files
# - if there are no basename collisions under input subdirectory 0, then mount them there.
# - otherwise, mount them in a fresh subdirectory
for files in host_files_by_dir.values():
based = os.path.join(self.container_dir, "work/_miniwdl_inputs")
subd = "0"
for host_file in files:
container_file = os.path.join(based, subd, os.path.basename(host_file))
if container_file in self.input_file_map_rev:
subd = str(len(self.input_file_map) + 1)
for host_file in files:
container_file = os.path.join(based, subd, os.path.basename(host_file))
assert container_file not in self.input_file_map_rev
self.input_file_map[host_file] = container_file
self.input_file_map_rev[container_file] = host_file
def copy_input_files(self, logger: logging.Logger) -> None:
# After add_files has been used as needed, copy the input files from their original
# locations to the appropriate subdirectories of the container working directory. This may
# not be necessary e.g. if the container implementation supports bind-mounting the input
# files from their original host paths.
for host_filename, container_filename in self.input_file_map.items():
assert container_filename.startswith(self.container_dir)
host_copy_filename = os.path.join(
self.host_dir, os.path.relpath(container_filename, self.container_dir)
)
logger.info(_("copy host input file", input=host_filename, copy=host_copy_filename))
os.makedirs(os.path.dirname(host_copy_filename), exist_ok=True)
shutil.copy(host_filename, host_copy_filename)
def run(self, logger: logging.Logger, command: str, cpu: int, memory: int) -> None:
"""
1. Container is instantiated with the configured mounts
2. The mounted directory and all subdirectories have u+rwx,g+rwx permission bits; all files
within have u+rw,g+rw permission bits.
3. Command is executed in ``{host_dir}/work/`` (where {host_dir} is mounted to
{container_dir} inside the container)
4. Standard output is written to ``{host_dir}/stdout.txt``
5. Standard error is written to ``{host_dir}/stderr.txt`` and logged at VERBOSE level
6. Raises CommandFailed for nonzero exit code, or any other error
The container is torn down in any case, including SIGTERM/SIGHUP signal which is trapped.
"""
# container-specific logic should be in _run(). this wrapper traps signals
assert not self._running
if command.strip(): # if the command is empty then don't bother with any of this
with TerminationSignalFlag(logger) as terminating:
if terminating():
raise Terminated()
self._running = True
try:
exit_status = self._run(logger, terminating, command, cpu, memory)
finally:
self._running = False
if exit_status != 0:
raise CommandFailed(
exit_status, os.path.join(self.host_dir, "stderr.txt")
) if not terminating() else Terminated()
@abstractmethod
def _run(
self,
logger: logging.Logger,
terminating: Callable[[], bool],
command: str,
cpu: int,
memory: int,
) -> int:
# run command in container & return exit status
raise NotImplementedError()
def reset(self, logger: logging.Logger, prev_retries: int) -> None:
"""
After a container/command failure, reset the working directory state so that
copy_input_files() and run() can be retried.
"""
artifacts_dir = os.path.join(self.host_dir, "failed_tries", str(prev_retries))
artifacts_moved = []
for artifact in ["work", "command", "stdout.txt", "stderr.txt", "stderr.txt.offset"]:
src = os.path.join(self.host_dir, artifact)
if os.path.exists(src):
os.renames(src, os.path.join(artifacts_dir, artifact))
artifacts_moved.append(src)
logger.info(
_("archived failed task artifacts", artifacts=artifacts_moved, dest=artifacts_dir)
)
os.makedirs(os.path.join(self.host_dir, "work"))
def host_file(self, container_file: str, inputs_only: bool = False) -> Optional[str]:
"""
Map an output file's in-container path under ``container_dir`` to a host path under
``host_dir``. Return None if the designated file does not exist.
SECURITY: except for input files, this method must only return host paths under
``host_dir`` and prevent any reference to other host files (e.g. /etc/passwd), including
via sneaky symlinks
"""
if os.path.isabs(container_file):
# handle output of std{out,err}.txt
if container_file in [
os.path.join(self.container_dir, pipe_file)
for pipe_file in ["stdout.txt", "stderr.txt"]
]:
return os.path.join(self.host_dir, os.path.basename(container_file))
# handle output of an input file
if container_file in self.input_file_map_rev:
return self.input_file_map_rev[container_file]
if inputs_only:
raise Error.InputError(
"task inputs attempted to use a non-input or non-existent file "
+ container_file
)
# relativize the path to the provisioned working directory
container_file = os.path.relpath(
container_file, os.path.join(self.container_dir, "work")
)
host_workdir = os.path.join(self.host_dir, "work")
ans = os.path.join(host_workdir, container_file)
if os.path.isfile(ans):
if path_really_within(ans, host_workdir):
return ans
raise OutputError(
"task outputs attempted to use a file outside its working directory: "
+ container_file
)
return None
class TaskDockerContainer(TaskContainer):
"""
TaskContainer docker (swarm) runtime
"""
image_tag: str = "ubuntu:18.04"
"""
:type: str
docker image tag (set as desired before running)
"""
as_me: bool = False
"""
:type: bool
If so then run command inside the container using the uid:gid of the invoking user. Otherwise
don't override container user (=> it'll often run as root).
"""
_bind_input_files: Optional[str] = "ro"
_observed_states: Optional[Set[str]] = None
_id_counter: AtomicCounter = AtomicCounter()
def copy_input_files(self, logger: logging.Logger) -> None:
assert self._bind_input_files
super().copy_input_files(logger)
# now that files have been copied, it won't be necessary to bind-mount them
self._bind_input_files = None
def _run(
self,
logger: logging.Logger,
terminating: Callable[[], bool],
command: str,
cpu: int,
memory: int,
) -> int:
self._observed_states = set()
with open(os.path.join(self.host_dir, "command"), "x") as outfile:
outfile.write(command)
# prepare docker configuration
if ":" not in self.image_tag:
# seems we need to do this explicitly under some configurations -- issue #232
self.image_tag += ":latest"
logger.info(_("docker image", tag=self.image_tag))
mounts = self.prepare_mounts(logger)
# we want g+rw on files (and g+rwx on directories) under host_dir, to ensure the container
# command will be able to access them regardless of what user id it runs as (we will
# configure docker to make the container a member of the invoking user's primary group)
chmod_R_plus(self.host_dir, file_bits=0o660, dir_bits=0o770)
resources, user, groups = self.misc_config(logger, cpu, memory)
# connect to dockerd
client = docker.from_env(timeout=900)
svc = None
try:
# run container as a transient docker swarm service, letting docker handle the resource
# scheduling (waiting until requested # of CPUs are available)
svc = client.services.create(
self.image_tag,
name=f"wdl-{self.run_id}-{os.getpid()}-{TaskDockerContainer._id_counter.next()}",
command=[
"/bin/bash",
"-c",
"id; ls -Rl ..; bash ../command >> ../stdout.txt 2>> ../stderr.txt",
],
# restart_policy 'none' so that swarm runs the container just once
restart_policy=docker.types.RestartPolicy("none"),
workdir=os.path.join(self.container_dir, "work"),
mounts=mounts,
resources=resources,
user=user,
groups=groups,
labels={"miniwdl_run_id": self.run_id},
container_labels={"miniwdl_run_id": self.run_id},
)
logger.debug(_("docker service", name=svc.name, id=svc.short_id))
exit_code = None
# stream stderr into log
with PygtailLogger(logger, os.path.join(self.host_dir, "stderr.txt")) as poll_stderr:
# poll for container exit
running = False
while exit_code is None:
time.sleep(random.uniform(1.0, 2.0)) # spread out work over the GIL
if terminating():
raise Terminated() from None
if "running" in self._observed_states:
if not running:
logger.notice("container running") # pyre-fixme
running = True
poll_stderr()
exit_code = self.poll_service(logger, svc)
logger.debug(
_(
"docker service logs",
stdout=list(msg.decode().rstrip() for msg in svc.logs(stdout=True)),
stderr=list(msg.decode().rstrip() for msg in svc.logs(stderr=True)),
)
)
logger.info(_("docker exit", code=exit_code))
# retrieve and check container exit status
assert isinstance(exit_code, int)
return exit_code
finally:
if svc:
try:
svc.remove()
except:
logger.exception("failed to remove docker service")
self.chown(logger, client)
try:
client.close()
except:
logger.exception("failed to close docker-py client")
def prepare_mounts(self, logger: logging.Logger) -> List[Dict[str, str]]:
def touch_mount_point(container_file: str) -> None:
# touching each mount point ensures they'll be owned by invoking user:group
assert container_file.startswith(self.container_dir + "/")
host_file = os.path.join(
self.host_dir, os.path.relpath(container_file, self.container_dir)
)
assert host_file.startswith(self.host_dir + "/")
os.makedirs(os.path.dirname(host_file), exist_ok=True)
with open(host_file, "x") as outfile:
pass
mounts = []
# mount input files and command
if self._bind_input_files:
perm_warn = True
for host_path, container_path in self.input_file_map.items():
st = os.stat(host_path)
if perm_warn and not (
(st.st_mode & 4) or (st.st_gid == os.getegid() and (st.st_mode & 0o40))
):
# file is neither world-readable, nor group-readable for the invoking user's primary group
logger.warning(
_(
"one or more input file(s) could be inaccessible to docker images that don't run as root; it may be necessary to `chmod g+r` them, or set --copy-input-files",
example_file=host_path,
)
)
perm_warn = False
touch_mount_point(container_path)
mounts.append(f"{host_path}:{container_path}:{self._bind_input_files}")
mounts.append(
f"{os.path.join(self.host_dir, 'command')}:{os.path.join(self.container_dir, 'command')}:ro"
)
# mount stdout, stderr, and working directory read/write
for pipe_file in ["stdout.txt", "stderr.txt"]:
touch_mount_point(os.path.join(self.container_dir, pipe_file))
mounts.append(
f"{os.path.join(self.host_dir, pipe_file)}:{os.path.join(self.container_dir, pipe_file)}:rw"
)
mounts.append(
f"{os.path.join(self.host_dir, 'work')}:{os.path.join(self.container_dir, 'work')}:rw"
)
logger.debug(_("docker mounts", mounts=mounts))
return mounts
def misc_config(
self, logger: logging.Logger, cpu: int, memory: int
) -> Tuple[Optional[Dict[str, str]], Optional[str], List[str]]:
resources = {}
if cpu:
# the cpu unit expected by swarm is "NanoCPUs"
resources["cpu_limit"] = cpu * 1_000_000_000
resources["cpu_reservation"] = cpu * 1_000_000_000
if memory:
resources["mem_reservation"] = memory
if resources:
logger.debug(_("docker resources", **resources))
resources = docker.types.Resources(**resources)
else:
resources = None
user = None
if self.as_me:
user = f"{os.geteuid()}:{os.getegid()}"
logger.info(_("docker user", uid_gid=user))
if os.geteuid() == 0:
logger.warning(
"container command will run explicitly as root, since you are root and set --as-me"
)
# add invoking user's group to ensure that command can access the mounted working
# directory even if the docker image assumes some arbitrary uid
groups = [str(os.getegid())]
if groups == ["0"]:
logger.warning(
"container command will run as a root/wheel group member, since this is your primary group (gid=0)"
)
return resources, user, groups
def poll_service(
self, logger: logging.Logger, svc: docker.models.services.Service
) -> Optional[int]:
status = {"State": "(UNKNOWN)"}
svc.reload()
assert svc.attrs["Spec"]["Labels"]["miniwdl_run_id"] == self.run_id
tasks = svc.tasks()
if tasks:
assert len(tasks) == 1, "docker service should have at most 1 task"
status = tasks[0]["Status"]
logger.debug(_("docker task", id=tasks[0]["ID"], status=status))
else:
assert (
len(self._observed_states or []) <= 1
), "docker task shouldn't disappear from service"
# log each new state
assert isinstance(self._observed_states, set)
if status["State"] not in self._observed_states:
logger.info(_("docker task transition", state=status["State"]))
self._observed_states.add(status["State"])
# https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/
# https://github.com/moby/moby/blob/8fbf2598f58fb212230e6ddbcfbde628b0458250/api/types/swarm/task.go#L12
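        # Per the references above, the logic below treats "complete" (or any state reporting a
        # non-zero ExitCode) as terminal and returns its exit code, surfaces failed/rejected/
        # orphaned/remove as runtime errors, and returns None for all other (in-flight) states.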
if "ExitCode" in status.get("ContainerStatus", {}):
exit_code = status["ContainerStatus"]["ExitCode"]
assert isinstance(exit_code, int)
if exit_code != 0 or status["State"] == "complete":
logger.info(_("docker task exit", state=status["State"], exit_code=exit_code))
return exit_code
if status["State"] in ["failed", "rejected", "orphaned", "remove"]:
raise RuntimeError(
f"docker task {status['State']}"
+ ((": " + status["Err"]) if "Err" in status else "")
)
return None
def chown(self, logger: logging.Logger, client: docker.DockerClient) -> None:
"""
After task completion, chown all files in the working directory to the invoking user:group,
instead of leaving them frequently owned by root or some other arbitrary user id (image-
dependent). We do this in a funny way via Docker; see GitHub issue #271 for discussion of
alternatives and their problems.
"""
if not self.as_me and (os.geteuid() or os.getegid()):
t_0 = time.monotonic()
script = f"""
chown -RP {os.geteuid()}:{os.getegid()} {shlex.quote(os.path.join(self.container_dir, 'work'))}
""".strip()
volumes = {self.host_dir: {"bind": self.container_dir, "mode": "rw"}}
logger.debug(_("post-task chown", script=script, volumes=volumes))
try:
chowner = None
try:
chowner = client.containers.run(
"alpine:3",
name=f"wdl-{self.run_id}-chown-{os.getpid()}-{TaskDockerContainer._id_counter.next()}",
command=["/bin/ash", "-c", script],
volumes=volumes,
detach=True,
)
chowner_status = chowner.wait()
assert (
isinstance(chowner_status, dict)
and chowner_status.get("StatusCode", -1) == 0
), str(chowner_status)
finally:
if chowner:
chowner.remove()
except:
logger.exception("post-task chown failed")
finally:
t_delta = time.monotonic() - t_0
if t_delta >= 60:
logger.warning(
_(
"post-task chown was slow (may indicate excessive file count and/or IOPS exhaustion)",
seconds=int(t_delta),
)
)
def run_local_task(
task: Tree.Task,
inputs: Env.Bindings[Value.Base],
run_id: Optional[str] = None,
run_dir: Optional[str] = None,
copy_input_files: bool = False,
runtime_defaults: Optional[Dict[str, Union[str, int]]] = None,
runtime_cpu_max: Optional[int] = None,
runtime_memory_max: Optional[int] = None,
logger_prefix: Optional[List[str]] = None,
as_me: bool = False,
) -> Tuple[str, Env.Bindings[Value.Base]]:
"""
Run a task locally.
Inputs shall have been typechecked already. File inputs are presumed to be local POSIX file
paths that can be mounted into a container.
:param run_id: unique ID for the run, defaults to workflow name
:param run_dir: directory under which to create a timestamp-named subdirectory for this run
(defaults to current working directory).
If the final path component is ".", then operate in run_dir directly.
:param copy_input_files: copy input files and mount them read/write instead of read-only
:param runtime_defaults: default values for runtime settings
:param runtime_cpu_max: maximum effective runtime.cpu value (default: # host CPUs)
:param runtime_memory_max: maximum effective runtime.memory value in bytes (default: total host
memory)
:param as_me: run container command using the current user uid:gid (may break commands that
assume root access, e.g. apt-get)
"""
# provision run directory and log file
run_id = run_id or task.name
run_dir = provision_run_dir(task.name, run_dir)
write_values_json(inputs, os.path.join(run_dir, "inputs.json"))
logger_prefix = (logger_prefix or ["wdl"]) + ["t:" + run_id]
logger = logging.getLogger(".".join(logger_prefix))
with LoggingFileHandler(logger, os.path.join(run_dir, "task.log")) as fh:
fh.setFormatter(logging.Formatter(LOGGING_FORMAT))
logger.notice( # pyre-fixme
_(
"task start",
name=task.name,
source=task.pos.uri,
line=task.pos.line,
column=task.pos.column,
dir=run_dir,
)
)
logger.info(_("thread", ident=threading.get_ident()))
try:
# download input files, if needed
posix_inputs = _download_input_files(logger, logger_prefix, run_dir, inputs)
# create appropriate TaskContainer
container = TaskDockerContainer(run_id, run_dir)
# evaluate input/postinput declarations, including mapping from host to
# in-container file paths
container_env = _eval_task_inputs(logger, task, posix_inputs, container)
# evaluate runtime fields
runtime = _eval_task_runtime(
logger, task, container_env, runtime_defaults, runtime_cpu_max, runtime_memory_max
)
container.image_tag = str(runtime.get("docker", container.image_tag))
container.as_me = as_me
# interpolate command
command = _util.strip_leading_whitespace(
task.command.eval(container_env, stdlib=InputStdLib(logger, container)).value
)[1]
logger.debug(_("command", command=command.strip()))
# start container & run command (and retry if needed)
_try_task(logger, container, command, runtime, copy_input_files)
# evaluate output declarations
outputs = _eval_task_outputs(logger, task, container_env, container)
# write and link outputs
from .. import values_to_json
outputs = link_outputs(outputs, run_dir)
write_values_json(outputs, os.path.join(run_dir, "outputs.json"), namespace=task.name)
# make sure everything will be accessible to downstream tasks
chmod_R_plus(container.host_dir, file_bits=0o660, dir_bits=0o770)
logger.notice("done") # pyre-fixme
return (run_dir, outputs)
except Exception as exn:
logger.debug(traceback.format_exc())
wrapper = RunFailed(task, run_id, run_dir)
info = {"error": exn.__class__.__name__}
if str(exn):
info["message"] = str(exn)
if hasattr(exn, "job_id"):
info["node"] = getattr(exn, "job_id")
logger.error(_(str(wrapper), **info))
raise wrapper from exn
def _download_input_files(
logger: logging.Logger, logger_prefix: List[str], run_dir: str, inputs: Env.Bindings[Value.Base]
) -> Env.Bindings[Value.Base]:
"""
Find all File values in the inputs (including any nested within compound values) that need
to / can be downloaded. Download them to some location under run_dir and return a copy of the
inputs with the URI values replaced by the downloaded filenames.
"""
downloads = 0
total_bytes = 0
def map_files(v: Value.Base) -> Value.Base:
nonlocal downloads, total_bytes
if isinstance(v, Value.File):
if downloadable(v.value):
logger.info(_("download input file", uri=v.value))
v.value = download(
v.value,
run_dir=os.path.join(run_dir, "download", str(downloads), "."),
logger_prefix=logger_prefix + [f"download{downloads}"],
)
sz = os.path.getsize(v.value)
logger.info(_("downloaded input file", uri=v.value, file=v.value, bytes=sz))
downloads += 1
total_bytes += sz
for ch in v.children:
map_files(ch)
return v
ans = inputs.map(
lambda binding: Env.Binding(binding.name, map_files(copy.deepcopy(binding.value)))
)
if downloads:
logger.notice( # pyre-fixme
_("downloaded input files", count=downloads, total_bytes=total_bytes)
)
return ans
def _eval_task_inputs(
logger: logging.Logger,
task: Tree.Task,
posix_inputs: Env.Bindings[Value.Base],
container: TaskContainer,
) -> Env.Bindings[Value.Base]:
# Map all the provided input Files to in-container paths
container.add_files(_filenames(posix_inputs))
# copy posix_inputs with all Files mapped to their in-container paths
def map_files(v: Value.Base) -> Value.Base:
if isinstance(v, Value.File):
v.value = container.input_file_map[v.value]
for ch in v.children:
map_files(ch)
return v
container_inputs = posix_inputs.map(
lambda binding: Env.Binding(binding.name, map_files(copy.deepcopy(binding.value)))
)
# initialize value environment with the inputs
container_env = Env.Bindings()
for b in container_inputs:
assert isinstance(b, Env.Binding)
v = b.value
assert isinstance(v, Value.Base)
container_env = container_env.bind(b.name, v)
vj = json.dumps(v.json)
logger.info(_("input", name=b.name, value=(v.json if len(vj) < 4096 else "(((large)))")))
# collect remaining declarations requiring evaluation.
decls_to_eval = []
for decl in (task.inputs or []) + (task.postinputs or []):
if not container_env.has_binding(decl.name):
decls_to_eval.append(decl)
# topsort them according to internal dependencies. prior static validation
# should have ensured they're acyclic.
decls_by_id, decls_adj = Tree._decl_dependency_matrix(decls_to_eval)
decls_to_eval = [decls_by_id[did] for did in _util.topsort(decls_adj)]
assert len(decls_by_id) == len(decls_to_eval)
# evaluate each declaration in that order
# note: the write_* functions call container.add_files as a side-effect
stdlib = InputStdLib(logger, container)
for decl in decls_to_eval:
assert isinstance(decl, Tree.Decl)
v = Value.Null()
if decl.expr:
try:
v = decl.expr.eval(container_env, stdlib=stdlib).coerce(decl.type)
except Error.RuntimeError as exn:
setattr(exn, "job_id", decl.workflow_node_id)
raise exn
except Exception as exn:
exn2 = Error.EvalError(decl, str(exn))
setattr(exn2, "job_id", decl.workflow_node_id)
raise exn2 from exn
else:
assert decl.type.optional
vj = json.dumps(v.json)
logger.info(_("eval", name=decl.name, value=(v.json if len(vj) < 4096 else "(((large)))")))
container_env = container_env.bind(decl.name, v)
return container_env
def _filenames(env: Env.Bindings[Value.Base]) -> Set[str]:
"Get the filenames of all File values in the environment"
ans = set()
def collector(v: Value.Base) -> None:
if isinstance(v, Value.File):
ans.add(v.value)
for ch in v.children:
collector(ch)
for b in env:
collector(b.value)
return ans
_host_memory: Optional[int] = None
def _eval_task_runtime(
logger: logging.Logger,
task: Tree.Task,
env: Env.Bindings[Value.Base],
runtime_defaults: Optional[Dict[str, Union[str, int]]],
runtime_cpu_max: Optional[int],
runtime_memory_max: Optional[int],
) -> Dict[str, Union[int, str]]:
global _host_memory
runtime_values = {}
if runtime_defaults:
for key, v in runtime_defaults.items():
if isinstance(v, str):
runtime_values[key] = Value.String(v)
elif isinstance(v, int):
runtime_values[key] = Value.Int(v)
else:
raise Error.InputError(f"invalid default runtime setting {key} = {v}")
for key, expr in task.runtime.items():
runtime_values[key] = expr.eval(env)
logger.debug(_("runtime values", **dict((key, str(v)) for key, v in runtime_values.items())))
ans = {}
if "docker" in runtime_values:
ans["docker"] = runtime_values["docker"].coerce(Type.String()).value
if "cpu" in runtime_values:
cpu_value = runtime_values["cpu"].coerce(Type.Int()).value
assert isinstance(cpu_value, int)
cpu = max(1, min(runtime_cpu_max or multiprocessing.cpu_count(), cpu_value))
if cpu != cpu_value:
logger.warning(
_("runtime.cpu adjusted to local limit", original=cpu_value, adjusted=cpu)
)
ans["cpu"] = cpu
if "memory" in runtime_values:
memory_str = runtime_values["memory"].coerce(Type.String()).value
assert isinstance(memory_str, str)
try:
memory_bytes = parse_byte_size(memory_str)
except ValueError:
raise Error.EvalError(
task.runtime["memory"], "invalid setting of runtime.memory, " + memory_str
)
if not runtime_memory_max:
_host_memory = _host_memory or psutil.virtual_memory().total
runtime_memory_max = _host_memory
assert isinstance(runtime_memory_max, int)
if memory_bytes > runtime_memory_max:
logger.warning(
_(
"runtime.memory adjusted to local limit",
original=memory_bytes,
adjusted=runtime_memory_max,
)
)
memory_bytes = runtime_memory_max
ans["memory"] = memory_bytes
if "maxRetries" in runtime_values:
ans["maxRetries"] = max(0, runtime_values["maxRetries"].coerce(Type.Int()).value)
if ans:
logger.info(_("effective runtime", **ans))
unused_keys = list(key for key in runtime_values if key not in ans)
if unused_keys:
logger.warning(_("ignored runtime settings", keys=unused_keys))
return ans
def _try_task(
logger: logging.Logger,
container: TaskContainer,
command: str,
runtime: Dict[str, Union[int, str]],
copy_input_files: bool,
) -> None:
"""
Run the task command in the container, with up to runtime.maxRetries
"""
maxRetries = runtime.get("maxRetries", 0)
prevRetries = 0
while True:
# copy input files, if needed
if copy_input_files:
container.copy_input_files(logger)
try:
# start container & run command
return container.run(
logger, command, int(runtime.get("cpu", 0)), int(runtime.get("memory", 0))
)
except Exception as exn:
if isinstance(exn, Terminated) or prevRetries >= maxRetries:
raise
logger.error(
_(
"task failure will be retried",
error=exn.__class__.__name__,
message=str(exn),
prevRetries=prevRetries,
maxRetries=maxRetries,
)
)
container.reset(logger, prevRetries)
prevRetries += 1
def _eval_task_outputs(
logger: logging.Logger, task: Tree.Task, env: Env.Bindings[Value.Base], container: TaskContainer
) -> Env.Bindings[Value.Base]:
# helper to rewrite Files from in-container paths to host paths
def rewrite_files(v: Value.Base, output_name: str) -> None:
if isinstance(v, Value.File):
host_file = container.host_file(v.value)
if host_file is None:
logger.warning(
_(
"output file not found in container (error unless declared type is optional)",
name=output_name,
file=v.value,
)
)
else:
logger.debug(_("output file", container=v.value, host=host_file))
# We may overwrite File.value with None, which is an invalid state, then we'll fix it
# up (or abort) below. This trickery is because we don't, at this point, know whether
# the 'desired' output type is File or File?.
v.value = host_file
for ch in v.children:
rewrite_files(ch, output_name)
stdlib = OutputStdLib(logger, container)
outputs = Env.Bindings()
for decl in task.outputs:
assert decl.expr
try:
v = decl.expr.eval(env, stdlib=stdlib).coerce(decl.type)
except Error.RuntimeError as exn:
setattr(exn, "job_id", decl.workflow_node_id)
raise exn
except Exception as exn:
exn2 = Error.EvalError(decl, str(exn))
setattr(exn2, "job_id", decl.workflow_node_id)
raise exn2 from exn
vj = json.dumps(v.json)
logger.info(
_("output", name=decl.name, value=(v.json if len(vj) < 4096 else "(((large)))"))
)
# Now, a delicate sequence for postprocessing File outputs (including Files nested within
# compound values)
# First bind the value as-is in the environment, so that subsequent output expressions will
# "see" the in-container path(s) if they use this binding. (Copy it though, because we'll
# then clobber v)
env = env.bind(decl.name, copy.deepcopy(v))
# Rewrite each File.value to either a host path, or None if the file doesn't exist.
rewrite_files(v, decl.name)
# File.coerce has a special behavior for us so that, if the value is None:
# - produces Value.Null() if the desired type is File?
# - raises FileNotFoundError otherwise.
try:
v = v.coerce(decl.type)
except FileNotFoundError:
exn = OutputError("File not found in task output " + decl.name)
setattr(exn, "job_id", decl.workflow_node_id)
raise exn
outputs = outputs.bind(decl.name, v)
return outputs
def link_outputs(outputs: Env.Bindings[Value.Base], run_dir: str) -> Env.Bindings[Value.Base]:
"""
Following a successful run, the output files may be scattered throughout a complex directory
tree used for execution. To help navigating this, generate a subdirectory of the run directory
containing nicely organized symlinks to the output files, and rewrite File values in the
outputs env to use these symlinks.
"""
def map_files(v: Value.Base, dn: str) -> Value.Base:
if isinstance(v, Value.File):
hardlink = os.path.realpath(v.value)
assert os.path.isfile(hardlink)
symlink = os.path.join(dn, os.path.basename(v.value))
os.makedirs(dn, exist_ok=False)
os.symlink(hardlink, symlink)
v.value = symlink
# recurse into compound values
elif isinstance(v, Value.Array) and v.value:
d = int(math.ceil(math.log10(len(v.value)))) # how many digits needed
for i in range(len(v.value)):
v.value[i] = map_files(v.value[i], os.path.join(dn, str(i).rjust(d, "0")))
elif isinstance(v, Value.Map):
# create a subdirectory for each key, as long as the key names seem to make reasonable
# path components; otherwise, treat the dict as a list of its values
keys_ok = (
sum(
1
for b in v.value
if re.fullmatch("[-_a-zA-Z0-9][-_a-zA-Z0-9.]*", str(b[0])) is None
)
== 0
)
d = int(math.ceil(math.log10(len(v.value))))
for i, b in enumerate(v.value):
v.value[i] = (
b[0],
map_files(
b[1], os.path.join(dn, str(b[0]) if keys_ok else str(i).rjust(d, "0"))
),
)
elif isinstance(v, Value.Pair):
v.value = (
map_files(v.value[0], os.path.join(dn, "left")),
map_files(v.value[1], os.path.join(dn, "right")),
)
elif isinstance(v, Value.Struct):
for key in v.value:
v.value[key] = map_files(v.value[key], os.path.join(dn, key))
return v
return outputs.map(
lambda binding: Env.Binding(
binding.name,
map_files(
copy.deepcopy(binding.value), os.path.join(run_dir, "output_links", binding.name)
),
)
)
class _StdLib(StdLib.Base):
logger: logging.Logger
container: TaskContainer
inputs_only: bool # if True then only permit access to input files
def __init__(self, logger: logging.Logger, container: TaskContainer, inputs_only: bool) -> None:
super().__init__(write_dir=os.path.join(container.host_dir, "write_"))
self.logger = logger
self.container = container
self.inputs_only = inputs_only
def _devirtualize_filename(self, filename: str) -> str:
# check allowability of reading this file, & map from in-container to host
ans = self.container.host_file(filename, inputs_only=self.inputs_only)
if ans is None:
raise OutputError("function was passed non-existent file " + filename)
self.logger.debug(_("read_", container=filename, host=ans))
return ans
def _virtualize_filename(self, filename: str) -> str:
# register new file with container input_file_map
self.container.add_files([filename])
self.logger.debug(
_("write_", host=filename, container=self.container.input_file_map[filename])
)
self.logger.info(_("wrote", file=self.container.input_file_map[filename]))
return self.container.input_file_map[filename]
class InputStdLib(_StdLib):
# StdLib for evaluation of task inputs and command
def __init__(self, logger: logging.Logger, container: TaskContainer) -> None:
super().__init__(logger, container, True)
class OutputStdLib(_StdLib):
# StdLib for evaluation of task outputs
def __init__(self, logger: logging.Logger, container: TaskContainer) -> None:
super().__init__(logger, container, False)
setattr(
self,
"stdout",
StdLib.StaticFunction(
"stdout",
[],
Type.File(),
lambda: Value.File(os.path.join(self.container.container_dir, "stdout.txt")),
),
)
setattr(
self,
"stderr",
StdLib.StaticFunction(
"stderr",
[],
Type.File(),
lambda: Value.File(os.path.join(self.container.container_dir, "stderr.txt")),
),
)
def _glob(pattern: Value.String, lib: OutputStdLib = self) -> Value.Array:
pat = pattern.coerce(Type.String()).value
if not pat:
raise OutputError("empty glob() pattern")
assert isinstance(pat, str)
if pat[0] == "/":
raise OutputError("glob() pattern must be relative to task working directory")
if pat.startswith("..") or "/.." in pat:
raise OutputError("glob() pattern must not use .. uplevels")
if pat.startswith("./"):
pat = pat[2:]
# glob the host directory
pat = os.path.join(lib.container.host_dir, "work", pat)
host_files = sorted(fn for fn in glob.glob(pat) if os.path.isfile(fn))
# convert the host filenames to in-container filenames
container_files = []
for hf in host_files:
dstrip = lib.container.host_dir
dstrip += "" if dstrip.endswith("/") else "/"
assert hf.startswith(dstrip)
container_files.append(os.path.join(lib.container.container_dir, hf[len(dstrip) :]))
return Value.Array(Type.File(), [Value.File(fn) for fn in container_files])
setattr(
self,
"glob",
StdLib.StaticFunction("glob", [Type.String()], Type.Array(Type.File()), _glob),
)
Advancing a Sustainable Career Model for Political Science Students: Implications for Career Development Research and Practice

This paper aims to assist lecturers, universities, and their administrators in improving the relevance of political science undergraduate degree programs in the context of globalization and the Fourth Industrial Revolution. It reflects on how to tailor the political science degree to support a sustainable career and improve students' future employability. The latest theoretical frameworks incorporating the concept of a "sustainable" career were used in advancing a model of employability in the political science field. The author relies on a qualitative approach and a literature review, with implications for practice, in advancing the notion that competency-based approaches built around specific skills are vital to ensuring relevance and sustaining career opportunities for modern political science students. Educators should rethink how they deliver political science degrees, keeping in mind emerging trends in technology, pedagogical approaches, and HR practices in the respective job markets. This paper offers insight into how to tailor an engaging political science program for the future of work.

Introduction

Higher education is changing. More and more new methods are being used to accommodate students' learning styles and modern technologies (Ahmad, 2018b, 2019a, 2020a). The change is happening across various disciplines (in teaching ethics, law, family business, CSR) and in engaging millennials (Ahmad, 2018a, 2019b, 2020b, 2020d). Scholars are reimagining the future of higher education, proposing new student support models (Ahmad, 2020g, 2021). In this scheme of things, political science is, in the author's view, a very important subject. However, the political science curriculum in a typical university is predominantly theoretical, emphasizing political thought, history, and philosophy, with less emphasis placed on practical skills (Ahmad, 2020c). Owing to this lack of practical grounding, many political science majors question their employability. Universities must therefore prioritize teaching students the skills they will need for the future of work and Industry 4.0. One recommendation is that lecturers should create scenarios that test students' practical problem-solving and critical-thinking skills rather than their memory. Others contend that we need to rethink outdated models of career development and advance more modern approaches and theories of sustainable career development to ensure employability in an increasingly globalized and technology-driven society. Furthermore, competency-based approaches provide useful mechanisms for universities, employers, and undergraduate students to measure and assess the relevance of political science degrees and the ability to access and sustain various career paths over a lifetime. In this paper, the author reviews the literature on the relevance of degree programs in preparing students for the future of work in the political science discipline. The author examines, in detail, early and more recent models of career development and evaluates their relevance in the present work context.
The latest theoretical frameworks incorporating the concept of "sustainable" career development are also analysed and used in advancing the model of employability in this field. The author relies on a qualitative approach to the literature review with implications for practice in advancing the notion that competency-based approaches with the development of specific skills are vital in ensuring relevance and sustaining career opportunities for modern political science students into the future. The enhancement of special competencies such as advanced analytical, strategic, critical thinking, and social skills are recognized as increasingly important for students to attain. Research and appraisal competencies, especially related to public policy and decision-making, are also in demand. Universities are also experimenting with simulation and scenario-based exercises to provide practical-based work experiences and enhance active learning pedagogies in political science teaching and learning methods. This paper examines how some of the top universities in the world and work organizations, through their career development and HR initiatives, are applying such approaches in preparing students and young graduates for sustainable employability in their chosen political science field. The author hopes that the findings will provide some direction for crafting best practices and recommendations which can be implemented to prepare students for the future. The Relevance of Political Science Degree Programs Many graduates and parents of graduates of political science programs are incognizant of the relevance of this degree within the place of work. This ignorance has landed many students in an unpropitious position as they lack the 'know-how' to apply themselves to the job market. With little doubt, many past and upcoming graduates of political science are baffled with the questions of 'What life will be like after university?', 'What is the next step?', 'Where do I go from here?' Currently, several student-university discourses have been taking place on how these tertiary institutions can advance preparing students of political science for the world of work. Educating students about history, philosophy, and government systems are essential for their development as mature human beings. This is integral for establishing a good citizenry, but more essential to this is the idea of developing a productive workforce. Early perspectives on political science discipline contend that little change has occurred in the curriculum over the past century. For instance, the American Political Science Association asserts that while there has been some shift in focus away from knowledge and information gathering towards the attainment of skills, little attention has been paid to the overall structure of political science programs in terms of exposing students about the process of government and political systems. "Structural and attitudinal impediments" such as cultural and incentive-based factors, the lack of supporting institutional framework to implement, promote and sustain new practice-oriented teaching methods are seen as influencing factors and a significant change in the political science curriculum (). Others express a more radical view that political science as a discipline has witnessed a serious decline in rigorous scholarly engagement in the current neoliberal setting. 
Such an environment fosters or facilitates the "rise of careerism" with too much focus in higher education structure on career and personal pursuits at the expense of larger public outreach and social obligations. Higher education institutions may want to rethink their approach to teaching political science within their larger obligations to society. The delivery of an education which focusses on an active engagement in pressing political and social issues, the pursuit of rigorous research agendas, and applying sound theoretical and methodological principles to advance the causes of democracy and society is seen as preferable to a "fixation on prestige, ranking and careerism". However, is this a realistic perspective with respect to the role of political science in the 21 st century? A more recent review of the state of political science in universities in the current context and implications for career development and future work prospects demonstrate the range of complex issues grappling the discipline. Conducted by the American Political Science Association, research illustrates the challenges of balancing the needs of providing students with competitive in-demand degree programs while preparing them to fulfill their obligations in addressing wider societal, civic, and international concerns. In a 21 st -century context, changing demographics, diversity, and inclusion issues can impact the teaching and learning quality and, by extension, the perceived effectiveness and relevance of current political science programs (American Political Science Association, 2011). As a result, in evaluating these issues, it is important to probe how existing curriculum programs and supporting capacity-building frameworks may be modified and enriched to make them more relevant while at the same time increasing student outcomes. For instance, the American Political Science Association review supported by data-driven statistics indicated that in terms of enrolment and demographics in the US, Latinos led with the highest concentration of students pursuing undergraduate studies majoring in political science (45%) followed by African American (39%) and then Whites (38%) and with more women at just over 57% in 2009. With respect to diversity in discussions and assignments, it was found that student feedback on experiences of the various political science programs was adequate. Generally, high levels of accommodation of diverse views and perspectives, application of theory to practical problems, and the application of policy-oriented courses seem to make it relevant to concerns of a growing diverse student population (American Political Science Association, 2011). Such findings have profound implications for the future direction of teaching and learning pedagogies and, more importantly, the incorporation of relevance and inclusiveness into political science programs. The report outlines a number of interesting recommendations on how best to modify the current curriculum to enable undergraduates to obtain an enhanced perception of inclusiveness and relevance of their political science studies in the 21 st -century setting. 
These revolve around as increasing the range and variety of teaching methods and techniques, reorganizing and restructuring the syllabus to unlearn outdated concepts, relearn, learn and test new concepts which test and support the diversity and inclusiveness model; and internationalizing the curriculum in terms of integrating new methodologies, technologies, learning materials, and resources to modernize and enable it to meet global standards. More importantly, higher education institutions (HEIs) will have to address human resource concerns in terms of leadership development, hiring and retention, the provision of mentoring initiatives for its increasingly diverse graduate student population. Finally, in terms of a 21 st -century capacitybuilding framework, there needs to be a greater push at collaboration and partnership with external bodies to obtain funding to drive the mandate to develop innovative teaching frameworks and models for political science departments in order to meet the challenges and embrace the opportunities of the discipline in the future (American Political Science Association, 2011). However, there can be no doubt that more current literature research focuses the study on the utility of a political science degree, on its relevance in the workplace setting, and the impact it will have on students' career development path throughout a lifetime. In one study conducted in Canada on the impact of the degree in the workplace setting in the non-governmental organization (NGO) sector, it was found that while still useful, desirable, and in demand, many Canadian employers felt that students lacked the right skill sets appropriate for work. Statistics and data compiled from the study indicated skill deficiencies, most notably competencies and attributes such as flexibility, adaptability, planning, time management, critical thinking, and analysis. From the graduates' perspective, those who worked in an NGO sector also perceived that their political science education did not contribute significantly to enhance their workplace skills and called for a deliberate shift in focus on the structure of the curriculum to integrate these skills. Another study indicated that of those considering pursuing a legal career, around 43% of respondents recommend a liberal arts, political science degree as highly appropriate and relevant, as it gives a solid knowledge base in social sciences, exposure to cultural and diverse social issues along with skills training opportunities in critical thinking, communication, and creativity, along with studies about current political processes. Others study the relevance of the degree program from the perspective of its modern-day appeal and student motivation for enrolment. One recent survey tries to probe students' motivations and perceptions about enrolment. Are they signing up primarily for developing valuable skills or gaining a practical understanding of how the real world functions? The quantitativebased study revealed that students prefer enrolling in political science courses to gain a better practical understanding of how the real-world functions. This took precedence over opportunities to develop skills. Although skills are important, the study emphasizes that students were attracted to the courses not merely for skills attainment learning technical and science-related disciplines geared towards specific career paths. 
Rather, courses, if structured as "generalist" with the objective of providing students with general competencies such as adaptability and employability, were of far greater relevance for coping in current uncertain working environments. However, others counter this position by advancing the view that political science skills will gain increased relevance from an economic standpoint in a globalized and interconnected world. While conceding that many university students embark on studying the discipline in ascertaining how the world works, the processes and structures of political systems to contribute to society provide a closer study on how industry and companies can derive value from the skills of political science graduates. In fact, there is a high place for a political scientist to contribute in significant ways to the economy. Specific examples in a business, economic, and work context include ability to create or change new rules and regulations, present varying views and perspectives, such as scenario planning, and partake in the final decisionmaking process; and making offer/counter offer, reconciliation, negotiate deals and on arbitration matters. Since numerous job opportunities exist in varying professions in the corporate business sector, legal, trade union movement, NGO's and government, it is becoming increasingly evident that the creation of modern models of career development will be integral to the growth of sustained employment and increased societal relevance, via skills and competencies, self-development, personal and professional opportunities for political science students over their career life (). Advancing Modern Theories of "Sustainable" Career Development Most recent literature on career development frameworks has seen the emergence of the notion of "sustainable" careers given the present context of technology, globalization, etc. (De ). Much of the literature seems to focus on the need to provide students and young graduates entering the workforce with career competencies in order to future-proof their careers and better guarantee employability and career fulfillment over a lifetime. This model utilizes a systematic and dynamic approach in investigating the factors which will influence or impact the sustainability of career design and development in multifaceted and evolving work circumstances. It asserts that the three key components of "person, time and context" are crucial elements in ensuring sustainable careers, with "happiness, health and productivity" being important indicators of sustainability (De ). The crucial mechanism for this conceptual model is the application of systematic perspectives to the dynamic interplay or interaction of these elements to create a basis for sustainable careers. For instance, "career shocks," defined as unexpected career events, can be used to study the impact of the interaction of these three dimensions. Contextual factors are especially important in studying the effect of sustainability and for future research and planning in coping and managing career transitions. Secondly, evolving categories of work and employment arrangements can also impact the dimensions. Factors such as working groups, industry type, age grouping, demographics, diversity issues, inclusiveness, and work environment all affect the context in which careers evolve over time. Thirdly, changes relating to age, psycho-social, values, and societal perspectives will change over time and impact sustainability over a person's long career life span. 
Therefore, HR practitioners and HE institutions must adopt future-oriented research, planning, and design approaches to enable careers to become sustainable over time. This model outlines specific recommendations on how individuals and institutions can adapt and cope with events or changes which affect career goals: 1. The use of a research model and analytical tools. Applying longitudinal research and time-sensitive analytical models will help understand cycles of adaptation and build more robust career sustainable frameworks in a dynamic and evolving environment. 2. Most importantly, prospective, reflective, and retrospective studies are critical in gaining insights into potential pitfalls or causal factors why nonsustainable careers may develop over time. This will enable the better design of future career development initiatives using qualitative research methods to "future-proof" careers in a dynamic and evolving work environment. Even more recent literature research investigates the role which career competencies, success, and shocks have in determining long-term career employability and sustainability. For instance, Blokker et al.'s research advances and builds on the above model of sustainable careers by emphasizing the importance of moderating or mediating factors. It is assumed that higher career competencies lead to greater career success and employability, but little is known about the impact of "career shocks." The essential takeaway from the authors' study is that in line with career constructionist theory (CCT), it is important to distinguish between different types of success (subjective/objective/perceived) and employability (internal/external) and career shocks (positive/negative) as they all impact on the interactions in this model of sustainability. Applying Elements of Competencies, Success, and Shocks in "Sustainable" Career Development Model In terms of theory, an early element of CCT is defined as the role of competencies in obtaining success and employability. It largely entails a process of designing and building a career, utilizing resources to cope with demands, challenges, and opportunities of career life. In addition, there is continuous adaptation, integration, and development to navigate existing work circumstances to maintain and sustain long-run employability. With regards to elements of career competencies, this involves all those "knowledge, skills and abilities" essential to career development, but enhanced by the individual incorporating a wide range of "reflective, behavioural and communicative" skills to assist, guide, and motivate one's career pursuits (). Other related activities such as the application of adaptive behaviours, vocational abilities, techniques in adding value to their organization, and acquiring a positive perception of their internal and external employability are considered in the range of competencies. Elements of career success are defined as all those accomplishments which result from work activities over the long term. However, we need to distinguish between "perceived, subjected or self-evaluated success and objective career success," which are measurably verifiable attained success (). According to the theoretical framework of CCT, those with high levels of competence will be perceived "as more employable and enjoying higher career success." Career shocks are defined as unexpected, infrequent, extraordinary events that can positively or negatively impact an individual's career path, goal, objective, and development. 
In accordance with CCT, it can provide the impetus for young career individuals to reassess, re-evaluate, and revise their career development process in terms of requirements to improve or enhance their career objectives. Some argue that positive shocks tend to motivate, inspire and create confidence in realizing preferred career goals. In contrast, negative shocks tend to severely hinder and undermine the career decision-making process and overall development process. The framework asserts that career shocks are an important mediating factor in career success and employability. Career shocks can severely impact, so it is necessary to provide young with vital coping strategies, incorporating other soft skills such as flexibility, adaptability, resilience, lifelong and counseling training programs in order to navigate the current uncertain career environment. In terms of practical implications -the incorporation of competencies, success, and shock factors into a model of career sustainability provides unique, insightful information for career development and HR professionals in generating workable approaches to success and employability over time. Competency-Based Approaches in Facilitating Sustainable Careers for Political Science Students Therefore, the question arises: To what extent can lecturers, university administrators, and human resource professionals facilitate the training and development needs of political science students to advance their career development prospects and ensure sustainable employment opportunities? This paper suggests the notion that competency-based approaches with the development of specific targeted skills will be crucial in ensuring the relevance and sustained employability for modern political science students in the future. The author will examine what these competencies are and how they are being tested and integrated into the curriculum in select university institutions across the globe. There is an abundance of current literature replete with recommendations, strategies, and blueprints on how best to implement novel career management initiatives in the current work environment. Some propose approaches to emphasize diversity (Mershon & Walsh, 2016), self-management (Wilhelm & Hirschi, 2019), and work-integrated methodologies (Jackson & Wilton, 2019); others suggest the redesigning and customizing workloads integrated with organizational management involvement in sharing responsibilities as the solution to future job security (Kossek & Ollier-Malaterre, 2019). Furthermore, others see the need for learning and work institutions to focus on formalized institution led training and development to improve networking, career placement, and mentoring opportunities as a direct pathway for upward career mobility (;Likov & Tomk, 2013). This author, however, proposes to hone in on the application of innovative competency-based approaches being deployed in institutions to ensure more sustained employment for graduates. This author speaks specifically to the integration of technology and newer pedagogical methods to increase the acquisition of in-demand workplace skills in the 21 st -century work setting. The world has begun to witness the emergence of technology-driven simulation, and scenario-based instructional delivery, e-learning, blended active learning programs encroaching on political science degree programs. 
Also, there is an emphasis on training in specific competencies relating to the development of "political skills" and of research and appraisal methodologies in particular, to meet the demand for public policymaking, decision making, and highly developed analytical, cognitive, social, and networking capabilities that fill growing, high-demand job opportunities in private corporations, international relations, and diplomacy.

Public Policy Decision Making Skills

There is the view that political science degrees need sharper focus in the context of the changing job market and the emphasis on applied degrees. There is currently too much reliance on voluntary internships for career development. It is recommended that institutions design specific career models for the current job market which emphasize a combination of marketable competencies, such as empirical research methods and statistical analysis, alongside training and instruction in career-building techniques comprising interviewing, networking, candidate portfolio management, and mentoring. One of the skills identified as lacking in the world of work connected to the political science and public administration disciplines is critical appraisal competency for evidence-based policymaking. Such skills are considered vital for deciding on the best policy options and for problem-solving. In a recent comprehensive study on the integration of public policy appraisal skill training into the curriculum across universities in Canada, a number of challenges and recommendations were identified for more effective training in these institutions. Research studies continue to illustrate the severe gap between the demand for and supply of policy analysis skills within Canadian government services, the bureaucratic capacity constraints, the competencies required, and lessons for management in better analyzing recruitment issues to improve such capabilities within the civil service (Dobuzinskis & Howlett, 2018; Howlett, 2015; Lindquist & Desveaux, 2007). Barriers include the lack of systematic and transparent methodologies, which often leads to wide variation in teaching effectiveness across institutions in Canada. Other challenges include low availability of and access to research, the reliability of findings, and timing and cost issues, leading to the risk of bias in research and decision-making. The authors recommend that, prior to practicing evidence-based policymaking, practitioners must acquire specific training competencies in "searching, selecting, appraising, synthesizing, and communicating findings" before joining the workforce, i.e., at university. Efforts to overhaul public policy programs and invigorate them with innovative approaches to teaching are taking place at universities in North America. Recommended methods, such as critical appraisal steps utilizing appraisal checklists with systematic and validated tool methodology, knowledge synthesis, and scoping review methods, are being experimented with to meet the demands of a present work environment characterized by information overload, high risk of bias, and information asymmetry in the transfer process. Here the author presents a review of the work at four North American universities to address these various competency-based issues and better prepare students in their career development paths.
Table 1. Competency-Based Approaches: Simulated Exercises and Role-Play Methodologies (columns: approach; university or higher education institution; link(s))

- Simulated legislative process (Heidelberg University; link: Heidelberg Political Science Program): recreates an environment reflective of the real legislative process, helping students understand how laws are made, draft effective policies to redress specific issues, and apply critical thinking and problem-solving skills along the way.
- Use of "LegSim" (Drake University; link: Drake University Use of Simulations): a web-based virtual simulated legislature in which students role-play legislators, develop policy proposals, and participate in the decision-making process to enact laws.
- Model simulations (Drake University; link: Drake University Simulations): the ability to connect to "Model United Nations" (MUN), "Model European Union", and "Model Arab League" simulations, which give students practical exposure to negotiating laws and policies on international and global issues and the opportunity to interact with political science students in other countries from diverse cultural backgrounds.
- Across these programs, undergraduate political science students use simulation sessions to gain networking, group-work, and interaction experience while also using internships as a valuable resource for career development.

Role Play and Simulation Exercises

Questions have arisen about the continued lack of program structure, transformation, and direction in the current political science curriculum. Some universities, however, have begun to adopt transformative approaches by experimenting with the classroom experience, using professional building courses, practical internships, and innovative teaching and learning practices used by the British and American Political Science Associations. In particular, there is growing use of simulation exercises to increase collaboration across disciplines and departments. One specific way in which this has been accomplished is the use of Model United Nations (MUN), a practice-oriented simulation exercise that facilitates deep learning and enhances professional competencies in the political science and international relations fields. This technology-enhanced simulation learning method links key learning objectives to four levels of knowledge, namely factual, conceptual, procedural, and metacognitive competencies. Lessons learned so far from its application in North American university institutions are that this active learning pedagogy is effective in building real-world experience, such as stronger negotiation skills, cooperation, and leadership, which is useful in preparing students for careers in diplomacy and foreign affairs. A slight variation in the application of MUN simulation exercises in the British higher education system has focused on engaging students as "co-producers of knowledge" (Obendorf & Randerson, 2012). For instance, MUN has been used as a primary teaching tool in the politics and international relations programs at the University of Lincoln in the UK over the last decade. They found that this simulation-driven teaching and learning approach, applied in the British HE context, was valuable in enhancing "engaged research" and developing students' competencies as "producers of learning and knowledge."
In addition, this blending of research skills with the practice of diplomacy is becoming increasingly important in developing students' future career prospects (Obendorf & Randerson, 2012). More recent research examines the establishment or extension of a similar type of curriculum in the UK, focusing on the use of simulation type "action" based teaching approaches as an alternative to traditional classroom political science instruction. The creation of a "Policy Commission" serves to foster approaches that facilitate action-based learning in politics through various activities, which include allowing students to direct and control simulation exercises, volunteering, participating in political campaigns, community and nongovernmental organizations (NGOs), and becoming members of action learning groups (). Students' participation in the recent implementation of the Policy Commission experiment at a select UK university over the period 2013-2016 led to some interesting findings. Students had an increased awareness of the importance of "problem analysis, project management, and communication and presentation skills" on their future career prospects. On a practical level, participation in the Commission allowed students to engage actively, network, and contribute with players on policy and decision-making processes in the community and political fields. This had a positive impact on enhancing their future career prospects (). There is no doubt that the implementation of role-play simulation methods in political science education continues to positively impact students' decision-making, engagement, and motivation levels (;). Innovative Pedagogical Approaches to Delivering Political Science Education The use of Immersive Virtual Reality (IVR) is being tested to investigate its usefulness in increasing motivation and practice in the training transfer process for employees (). The increased use of virtual communication tools and virtual learning technologies seems to be the future direction of learning and training in the higher education and career development fields. Following is the summary of select university institutions currently experimenting with such technology-enhanced and innovative pedagogical approaches. University of Cambridge Cambridge Multidisciplinary approach Fieldwork: Not limited to merely assisting, but they act as collaborators in research. This is useful because it allows students to grasp skills necessary for the workspace in the context of research and problem-solving. MIT Fieldwork Teaching Practical skills: Students are given handson training on skills that are required for the public sector. It increases the employability of the students, thus placing them in a more propitious position which is a shortcoming of only doing theoretical-based work. Cornell University Cornell University-Teaching Practical Skills Internship programs: Many students who are well recognised for excellent performances within such places are often given full-time job opportunities National University of Singapore (NUS) NUS Internship Programs Bi-disciplinary degreestudy money and power: Political Science majors also study Economics. Political Science covers the aspect of power through participation in politics, while economics Kings College Bi-Disciplinary Degree speaks to monetised power. Engaging students in both disciplines is interesting because it teaches students about the two most outstanding ways one can have and maintain control. 
Internship programs: often, students leave tertiary-level institutions with no form of work experience, and reliance on the theory-based knowledge they have attained is insufficient for what is required by workplaces.

Conclusion

A number of external factors impede the employability and relevance of political science majors as they compete in the job market. These may include the drastically reduced supply of labour due to globalization and the onset of the Fourth Industrial Revolution. It is, however, important to recognize that there are also internal issues within the pedagogical framework of political science. Universities ought to engage in more in-depth discourse on how to effect the relevant changes within this agenda to make political science majors more attractive on the job market. Additionally, these changes ought to be prioritized with much urgency to keep up with a constantly evolving world. Meanwhile, students should not depend on on-campus learning alone but commit to lifelong learning. The application of a "sustainable" career development framework, utilizing a competency-based framework for developing specific skills and competencies relevant to 21st-century political science careers, can go a long way toward ensuring long-term employability and professional advancement for future students. This paper also finds that universities experimenting with and implementing scenario-based, role-play simulation exercises, whether through internal delivery methods or through external interaction with other student groups via the Model United Nations concept, are better able to prepare students to gain real work exposure in networking, group work, negotiation, and legal arrangements, which are useful in the growing fields of international relations and diplomacy. More importantly, new pedagogical approaches, such as interdisciplinary and multi-disciplinary study, mentoring, and internships, are useful in advancing students' career development prospects. In particular, technology-enhanced learning, with the adoption of virtual learning environments and virtual communication tools, is fast becoming a major driving force in delivering higher education degrees on a global scale. |
package DTO.mappers;
import DTO.models.CoordinateDTO;
import DTO.models.EventDTO;
import DTO.models.EventTypeDTO;
import DTO.models.JamDTO;
import database_v2.exceptions.DataAccessException;
import database_v2.exceptions.RecordNotFoundException;
import java.util.ArrayList;
import java.util.List;
import models.Coordinate;
import models.Transportation;
import models.event.Event;
import models.event.Jam;
import models.event.EventType;
/**
* The class for the Event Mapper. It maps the Event model on the EventDTO model.
*/
public class EventMapper {
private final EventTypeMapper eventTypeMapper = new EventTypeMapper();
private final CoordinateMapper coordinateMapper = new CoordinateMapper();
/**
* Method to convert Event into EventDTO
* @param event The Event to convert
* @param id The id of the event
* @return EventDTO object
*/
public EventDTO convertToDTO(Event event, String id) {
CoordinateDTO coordinateDTO = coordinateMapper.convertToDTO(event.getCoordinates());
EventTypeDTO eventTypeDTO = eventTypeMapper.convertToDTO(event.getType());
List<Jam> jams = event.getAllJams();
JamDTO[] jamsDTO = new JamDTO[jams.size()];
for (int i = 0; i < jams.size(); i++) {
Jam jam = jams.get(i);
List<Coordinate> line = jam.getLineView();
CoordinateDTO[] lineDTO = new CoordinateDTO[line.size()];
for (int j = 0; j < line.size(); j++) {
lineDTO[j] = coordinateMapper.convertToDTO(line.get(j));
}
jamsDTO[i] = new JamDTO(
lineDTO,
jam.getPublicationString(),
jam.getSpeed(), jam.getDelay()
);
}
List<Transportation> transportList = event.getTransportTypes();
String[] relevantTransportTypes = new String[transportList.size()];
for (int i = 0; i < transportList.size(); i++) {
relevantTransportTypes[i] = transportList.get(i).toString();
}
EventDTO eventDTO = new EventDTO(
id,
coordinateDTO,
event.isActive(),
event.getPublicationString(),
event.getLastEditString(),
event.getDescription(),
event.getFormattedAddress(),
jamsDTO,
eventTypeDTO,
relevantTransportTypes
);
return eventDTO;
}
/**
* Method to convert EventDTO into Event
* @param eventDTO EventDTO object to convert into model
* @return Event (model) object
*/
public Event convertFromDTO(EventDTO eventDTO) {
Coordinate coordinate = coordinateMapper.convertFromDTO(eventDTO.getCoordinates());
ArrayList<Transportation> relTrans = new ArrayList<>();
for (String relTransDTO1 : eventDTO.getRelevantTransportationTypes()) {
relTrans.add(Transportation.fromString(relTransDTO1));
}
EventType type = new EventType(eventDTO.getType().getType(), relTrans);
List<Jam> jams = new ArrayList<>();
for(JamDTO jamDTO: eventDTO.getJams()) {
List<Coordinate> line = new ArrayList<>();
for(CoordinateDTO coord: jamDTO.getLine()) {
line.add(coordinateMapper.convertFromDTO(coord));
}
jams.add(new Jam(
jamDTO.getPublicationTime(),
line,
jamDTO.getSpeed(),
jamDTO.getDelay()
));
}
Event out = new Event(
coordinate,
eventDTO.isActive(),
eventDTO.getPublicationTime(),
System.currentTimeMillis(),
eventDTO.getDescription(),
eventDTO.getFormattedAddress(),
type
);
jams.forEach(jam -> {
out.addJam(jam);
});
return out;
}
}
|
Q:
What is the simplest possible circuit for a wireless on-off switch
On one side I have 5V DC powering an LED, and on the other side I have a button (switch/latch/whatever). What's the simplest circuit for this button to control the state of the LED (hopefully without OpAmps)? Range is not important, just the principle. Not including electromagnetic induction...
A:
Simplest way I know is using light: put a phototransistor into the circuit as a low-side switch and shine a light at it, and it will turn your LED on. This is good within line of sight. You can make this a fun laser-tag game by shielding the phototransistor so that only a head-on laser beam can hit it.
If you have to use radio waves within a short range, you can look into a TA7642-based AM receiver and a single-transistor AM transmitter. Feed the output of the TA7642 into a Schmitt trigger and control a MOSFET with its output. This works up to a few tens of meters, about covering your house and maybe including your neighbor in the range. If you have some elderly people living within that range, this can be a very Good Samaritan project.
For even wider coverage, you may be better off using Wi-Fi or 3G and controlling your trinket over the Internet. |
Anyone with information regarding the robbery is asked to call the Elk Grove Police Department’s detective bureau at 916-478-8060 or Crime Alert at 916-443-4357. Callers to Crime Alert can remain anonymous and may be eligible for a reward of up to $1,000. Tips also can be sent via SMS text message by entering 274637 on a cellphone, followed by Tip732 and the message. |
Resolution-Preserving Generative Adversarial Networks for Image Enhancement Generative adversarial networks (GANs) are used for image enhancement such as single image super-resolution (SISR) and deblurring. The conventional GANs-based image enhancement suffers from two drawbacks that cause a quality degradation due to a loss of detailed information. First, the conventional discriminator network adopts strided convolution layers which cause a reduction in the resolution of the feature map, and thereby resulting in a loss of detailed information. Second, the previous GANs for image enhancement use the feature map of the visual geometry group (VGG) network for generating a content loss, which also causes visual artifacts because the maxpooling layers in the VGG network result in a loss of detailed information. To overcome these two drawbacks, this paper presents a proposal of a new resolution-preserving discriminator network architecture which removes the strided convolution layers, and a new content loss generated from the VGG network without maxpooling layers. The proposed discriminator network is applied to the super-resolution generative adversarial network (SRGAN), which is called a resolution-preserving SRGAN (RPSRGAN). Experimental results show that RPSRGAN generates more realistic super-resolution images than SRGAN does, and consequently, RPSRGAN with the new content loss improves the average peak signal-to-noise ratio (PSNR) by 0.75 dB and 0.32 dB for super-resolution images with the scale factors of 2 and 4, respectively. For deblurring, the visual appearance is also significantly improved, and the average PSNR is increased by 1.54 dB when the proposed discriminator and content loss are applied to the deblurring adversarial network. |
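To make the architecture described in the abstract above concrete, here is a minimal sketch, assuming a Keras-style discriminator in which every strided convolution of the usual SRGAN discriminator is replaced by a stride-1 convolution, plus a VGG feature extractor truncated before its first maxpooling layer for the content loss. The layer counts, channel widths, and the choice of the block1_conv2 layer are illustrative assumptions, not details taken from the paper; the paper itself describes removing the maxpooling layers from the VGG network, which this truncation only approximates.

import tensorflow as tf

def resolution_preserving_discriminator(input_shape=(96, 96, 3), base_filters=64, num_blocks=4):
    # No strided convolutions and no pooling in the body, so the spatial
    # resolution of the feature maps is preserved throughout the network.
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(base_filters, 3, strides=1, padding="same")(inputs)
    x = tf.keras.layers.LeakyReLU(0.2)(x)
    for _ in range(num_blocks):
        x = tf.keras.layers.Conv2D(base_filters, 3, strides=1, padding="same")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU(0.2)(x)
    # Collapse spatially only at the very end to produce the real/fake score.
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inputs, tf.keras.layers.Dense(1)(x), name="rp_discriminator")

def pooling_free_vgg_features(layer_name="block1_conv2"):
    # Feature extractor cut off before any maxpooling layer, used to build a
    # content loss that keeps full-resolution detail.
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    return tf.keras.Model(vgg.input, vgg.get_layer(layer_name).output)

The content loss would then be the mean squared difference between pooling_free_vgg_features applied to the generated image and to the ground truth, in the same spirit as the SRGAN perceptual loss but without any pooled feature maps.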
When I opened this biography, I was as curious about how David Remnick would pull off a biography of a sitting president—after only one year in office—as I was about Obama himself. Is there more to learn about Obama? Michelle Obama said during the campaign that her husband's life is "an open book. He wrote it and you can read it." She was referring not only to his autobiography, Dreams of My Father, but to the fact that Obama is not a man of many secrets. If there is any remaining mystery, it is probably the result of our own inability to fathom a person who doesn't fit our categories.
The portrait painted by Remnick is quite familiar. He shows us how hard Obama worked to forge his identity, how comfortable and fluent he is in various cultural and linguistic contexts, and how capable he is of weighing the merits of different points of view. He is both comfortable in his own skin and a little touchy about criticism.
Remnick is less interested in Obama's personality than in the cultural moment of his emergence. The title refers to Obama's role as a bridge between the civil rights movement of the 1960s and the new political possibilities for African-American leadership that Obama's rise portends. The book begins with a meditation on a pivotal day in civil rights history, March 7, 1965, known as Bloody Sunday, when civil rights marchers, including John Lewis, attempted to cross Edmund Pettus Bridge in Selma, Alabama, and were attacked by police.
Remnick argues that Obama sees himself as a member of "the Joshua Generation" (the title of Remnick's 2008 piece in the New Yorker), a generation of African-American politicians who inherited the legacy of the civil rights movement but who are different kinds of political leaders than their civil rights heroes. Obama, in this metaphor, is Joshua to John Lewis's Moses—someone who can actually take African Americans to the promised land.
The formulation helps to clarify Obama's complicated relationships to African-American leaders of the older generation, and much of Remnick's book is focused on analyzing these relationships. Some, like Lewis, unequivocally embraced Obama's leadership. Others, like Jesse Jackson, chafed at Obama's style. Obama's careful distancing of himself from identity politics prompted some blacks to complain that Obama is not "black enough" and to suspect that he doesn't really represent African Americans.
Remnick's biography is heavy on the ins and outs of everyday politics. It is lighter on the question of how Obama developed his policies and where his political beliefs come from. That creates a certain imbalance: Obama comes off more as a shrewd politician (which of course he is) and less as a powerful thinker (which I suspect he also is). The reader learns a lot about almost every major political player in Chicago, but nothing, for example, about Reinhold Niebuhr, the theologian who is said to have had major influence on Obama's understanding of politics. I would have liked a greater balance between the two parts of Obama's achievement.
Instead, Remnick embroils his reader in the minutiae and squalor of Obama's campaigns. The reader gains a strong sense of people who are part of Obama's inner circle. David Plouffe, David Axelrod and Valerie Jarrett all make notable appearances. Perhaps the most significant figure besides Obama is Michelle Obama. Remnick shows her to be what her husband says she is: a woman with her feet on the ground and her eyes wide open. When Obama was sworn in as senator in 2005, after having received international attention for his speech at the Democratic convention the year before, Michelle Obama remarked (according to a Chicago Tribune story), "Maybe one day he will do something to warrant all this attention."
She is a reluctant politician's wife. She would have preferred a more private and more stable life, working perhaps in a foundation. Remnick gives a vivid portrait of her frustration during the years that she spent raising small children while her husband, as a state senator, drove around the state of Illinois shaking hands.
Equally striking is the account of how unhappy Obama was as a senator, both in the Illinois state house and the U.S. Senate. Not attracted to the constant socializing and good-ol'-boy traditions of congressional politics, he spent many lonely hours in his hotel room, going over policy notes, watching sports on TV and talking to Michelle on the phone. Remnick tells of Obama sitting in a committee meeting—where he was 18th in line to ask questions—listening to Joe Biden give what Remnick calls a "bloviation." Obama passed a note to an aide that read: "Shoot. Me. Now."
In Remnick's view, Obama was too restless and ambitious to enjoy the slow rhythms of Congress. When the iron of presidential politics was hot—something that for him happened quickly—he struck.
Remnick argues that one of Obama's central contributions has been to change Americans' narrative about themselves. Obama rejects both the "bootstrap" narrative of rugged individualism and the multicultural narrative of America as a collection of various injured parties. He chooses a third alternative in which all Americans can draw on the African-American experience to understand themselves as a people of progress and change, part of a nation in which the harmful past can be transcended and all can work together to create a better future. He draws on the African-American experience not to stress victimization, but to demonstrate that America is continually reaching to become more of what it is meant to be.
These are interesting points, and Remnick takes no short cuts in getting us there. But still I wonder: Why did Remnick write this biography now? It seems both too late and too soon. The many details Remnick supplies do not alter the picture of Obama that emerged in the campaign and in his early months as president, and at times Remnick seems unable to decide which of the political details of Obama's rise are most significant. Perhaps it is just too early to tell.
Read Lillian Daniel's review of Game Change by John Heilemann and Mark Halperin. |
Dauntless, a PC cooperative action game that has players working together to take down giant monsters, is entering its closed beta test at 12 p.m. Pacific today. You can get access to the testing period by buying a Founder’s Pack for the game, which start at $40.
Dauntless takes plenty of inspiration from Monster Hunter, Capcom’s series of beast-slaying role-playing games. However, that franchise hasn’t had a presence on PC (at least until Monster Hunter: World comes out in 2018), giving Dauntless a chance to tap that part of the market.
The final release of Dauntless will be free-to-play at some point in 2017. The Founder’s Packs give you access to the closed beta and some bonus in-game items. These betas, or Early Access, have become a popular way for developers to get people playing (and paying) for their work without having to commit to a final release. It’s a formula that has made modern PC hits like PlayerUnknown’s Battlegrounds huge successes. |
A federal judge has agreed to put off a trial involving Visto's patent-infringement claims against Research In Motion, but limited RIM's ability to cause further delays.
The trial over mobile e-mail provider Visto's lawsuit against RIM had been set to begin next week. Visto sued RIM in 2006 in the U.S. District Court for the Eastern District of Texas, claiming its popular BlackBerry system infringed four Visto patents and asking for a shutdown of RIM's service as well as damages. But on Wednesday, Magistrate Judge Charles Everingham granted a stay of the trial, requested by RIM, because several of the patent claims involved are being re-examined by the U.S. Patent and Trademark Office.
RIM had requested the re-examinations, in which the patent office is studying the validity of certain parts of Visto's patents. But as a condition of the stay, the company can't ask for any more re-examinations, either directly or indirectly, the judge wrote. RIM also won't be allowed to challenge the validity of any of the patents during the trial by bringing up evidence that has already been considered in the re-examinations.
Earlier this week, the patent office validated 21 out of 22 claims in one of those patents, number 7,039,679, which involves technology for synchronizing e-mail between a mobile device and a LAN server.
Mobile e-mail, based on complex sets of technologies and rapidly growing in popularity, has been fertile ground for patent disputes. RIM came to the brink of a service shutdown in 2006 before settling a suit brought by NTP for US$612.5 million. Visto has also aggressively defended its intellectual property, suing competitors including Good Technology, Seven and Microsoft. |
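# do_send_file_to_device: if the uploaded file is a VHD source and VHD uploads are allowed,
# it is handed to self._handle_vhd_file for synthesis; otherwise, when bitstream uploads are
# allowed, the content is programmed via self._program_file_t. The returned string reports
# the resulting state (NOT_ALLOWED, SYNTHESIZING or PROGRAMMING).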
def do_send_file_to_device(self, file_content, file_info):
self._switches_state = list("0000000000")
extension = file_info
if extension == "vhd":
try:
if not self._vhd_allowed:
self._current_state = STATE_NOT_ALLOWED
return "STATE=" + STATE_NOT_ALLOWED
if DEBUG: print "[DBG]: File received: Info: " + file_info
self._handle_vhd_file(file_content, file_info)
return "STATE=" + STATE_SYNTHESIZING
except Exception as ex:
if DEBUG: print "EXCEPTION: " + str(ex)
raise ex
else:
if not self._bit_allowed:
self._current_state = STATE_NOT_ALLOWED
return "STATE=" + STATE_NOT_ALLOWED
self._programming_thread = self._program_file_t(file_content)
return "STATE=" + STATE_PROGRAMMING |
In America, there’s a sense that hockey is a niche sport. There’s a large following, sure, but the NHL usually does not garner the amount of coverage or respect of the other pro sports in the country. This is why many hockey fans have an inferiority complex and it’s the reason #PleaseLikeMySport exists.
It’s also why hockey fans get downright giddy when they see a hockey reference in a movie or TV show or when they find out a celebrity is a hockey fan. It gives hockey fans a chance to say, “Look over there! This thing that I like is popular!”
NHL in Pop Culture
Believe it or not, fans have been treated to some nice hockey easter eggs in pop culture over the years. Wayne Gretzky was once in an episode of the popular soap opera The Young and the Restless in 1981 as a member of the mafia and had one line: “I’m Wayne from the Edmonton operation.”
Cam Neely has made several TV show appearances in addition to making a famous cameo in the hit comedy Dumb and Dumber.
Film producer and Toronto native Mike Myers threw a good number of Toronto Maple Leafs references into his Austin Powers movies and later made The Love Guru, a hockey movie in which the Maple Leafs won the Stanley Cup (it's also a movie that has a 14% rating on Rotten Tomatoes).
Full House, Happy Gilmore, 30 Rock, and Swingers also had hockey fans smiling because of their inclusion. And who could ever forget that famous episode in the hit TV sitcom Seinfeld?
That episode, aptly titled “The Face Painter,” at one point showed one of the characters wearing a red #30 Martin Brodeur jersey.
In Martin Brodeur’s book, Brodeur: Beyond the Crease, Brodeur wrote what it meant to him at that point.
Sports Illustrated published a cover questioning whether the NHL had supplanted the NBA as the hottest pro sport on the North American landscape. Having an NHL sweater featured on one of the most famous episodes of one of the best-loved American sitcoms of all time a year later was evidence, to some degree, of the potential for growth that was available to the NHL at that time.
And that’s what it means for hockey fans. They see the potential for their sport and they want to the see the game grow. And pop culture is one way to see that growth occur.
Pixar’s Inside Out
Hockey fans, once again, have reason to rejoice.
In Pixar’s upcoming film, Inside Out, there appears to be some hockey coming our way.
Here’s the trailer.
We see the main character, Riley, playing hockey at different points in the trailer. Clearly she’s a young kid in love with the sport and it seems like hockey will play a pretty big role in this movie.
Also, at the :19 mark, the father is shown spacing out, playing over an old hockey game in his head. (At first, I figured it’d be a game from the ’70’s based on the grainy footage. But upon further examination and based on this article about the history of the NHL ice, it’s hard to tell what era this game took place in. The crease is wide and blue and the ice in the net is white which is the style from 1996-1999. But the video also shows that there was a trapezoid behind the net, which we all know was instituted in 2006. I’ll have to investigate this further a different time.)
Pixar’s Plot Summary:
Growing up can be a bumpy road, and it’s no exception for Riley, who is uprooted from her Midwest life when her father starts a new job in San Francisco. Like all of us, Riley is guided by her emotions – Joy, Fear, Anger, Disgust and Sadness. The emotions live in Headquarters, the control center inside Riley’s mind, where they help advise her through everyday life. As Riley and her emotions struggle to adjust to a new life in San Francisco, turmoil ensues in Headquarters. Although Joy, Riley’s main and most important emotion, tries to keep things positive, the emotions conflict on how best to navigate a new city, house and school.
So I think it’s pretty obvious that the “Midwest” state Riley moving from is Minnesota. We’re shown a memory of her playing pond hockey with her parents and that’s basically Minnesota in a nutshell. Furthermore, the writer and director, Peter Docter, is a Minnesota native and is a graduate of University of Minnesota. He also mentioned that the main character was created in the image of his daughters and their friends.
.@iTunesTrailers For Riley we looked at our daughters & their friends. People think we make this stuff up. Most is real life! #AskInsideOut — #InsideOut (@PixarInsideOut) December 10, 2014
Docter’s previous movies (which include Up, Toy Story, and Monsters Inc.) were all successes and people are expecting another slam dunk out of him. It’s a creative movie and it’s an idea that’s never really been done before. People will/should be flocking to the theatres to see this.
Also, Amy Poehler, Mindy Kaling, and Bill Hader.
And best of all, there’s hockey in it. And, more importantly, a girl playing hockey.
There’s been some issues recently with sexism and misogyny in the hockey world (as detailed by Puck Daddy last week). Whether it’s fans chanting “Katy Perry,” or Morgan Rielly using the expression “You’re not here to be a girl about it,” girls are constantly being mistreated and thrown aside as an afterthought in the NHL world. Seeing a movie with a girl in love with hockey is certainly a breath of fresh air.
[RELATED: The National Women’s Hockey League: Impatience Is a Virtue]
Another subplot which I find interesting is that this hockey player is moving to California. California, as you may have realised, has been having a bit of a hockey resurgence over the past few seasons. With LA winning the Cup twice in the past 3 years and with Anaheim and San Jose (at least until recently) being top teams, California has found its way onto the map. We even had California native Emerson Etem scoring a goal for the Ducks in game 4 in the Winnipeg series.
As hockey fans we should appreciate the attention our sport receives. While the #PleaseLikeMySport campaign comes off as kind of desperate and whiny, we have the right to be proud of the progress of the league and the sport and we should be able to let others know about it.
The movie is set to be released in theatres June 19th. I expect to see you all there. |
Fungal susceptibility testing. The utility of antifungal susceptibility testing has not been broadly determined, so susceptibility testing of fungal isolates is not recommended on a routine basis. However, susceptibility testing may be considered, for instance, for some Candida species and for patients with Pseudallescheria boydii infections. Testing of yeasts for susceptibility to azoles is of particular value because of their variable response to these agents. It may also be important to test the susceptibility of new fungal organisms not previously identified or known to cause human disease, because in these situations there are no clinical reports of efficacy to guide the choice of antifungal therapy. |
import tensorflow as tf
from accord.agents.models import Probnet, Ampnet
class AmpDistAgent(tf.Module):
def __init__(self,
action_len,
vsize=5,
dense=512,
supportsize=51,
vmin=-10.0,
vmax=10.0,
starteps=1.0,
lr=1e-4,
adameps=1.5e-4,
name="distagent"):
super(AmpDistAgent, self).__init__(name=name)
self.action_len = action_len
self.optimizer = tf.keras.optimizers.Adam(learning_rate=lr,
epsilon=adameps)
self.losses = tf.keras.losses.KLDivergence(
reduction=tf.keras.losses.Reduction.NONE)
self.kldloss = tf.keras.losses.KLDivergence()
with tf.name_scope("probnet"):
self.probnet = Probnet(action_len=action_len,
dense=dense,
supportsize=supportsize)
with tf.name_scope("tnet"):
self.ampnet = Ampnet(action_len=action_len,
vsize=vsize,
dense=dense)
with tf.name_scope("selfnet"):
self.selfnet = Probnet(action_len=action_len,
dense=dense,
supportsize=supportsize)
self.supp = tf.constant(tf.linspace(vmin, vmax, supportsize),
shape=(supportsize, 1))
self.dz = tf.constant((vmax - vmin) / (supportsize - 1))
self.vmin = tf.constant(vmin)
self.vmax = tf.constant(vmax)
self.supportsize = tf.constant(supportsize, dtype=tf.int32)
self.eps = tf.Variable(starteps, trainable=False, name="epsilon")
@tf.function
def eps_greedy_action(self, state, epsval):
self.eps.assign(epsval)
dice = (tf.random.uniform([1], minval=0, maxval=1, dtype=tf.float32) <
self.eps)
raction = tf.random.uniform([1],
minval=0,
maxval=self.action_len,
dtype=tf.int64)
qaction = tf.argmax(self.qvalues(state))
return tf.where(dice, raction, qaction)
@tf.function
def amp_action(self, state, epsval):
self.eps.assign(epsval)
dice = (tf.random.uniform([1], minval=0, maxval=1, dtype=tf.float32) <
self.eps)
raction = tf.random.uniform([1],
minval=0,
maxval=self.action_len,
dtype=tf.int64)
qaction = tf.argmax(self.t_qvalues(state))
return tf.where(dice, raction, qaction)
@tf.function
def probvalues(self, states):
return tf.squeeze(self.probnet(states))
@tf.function
def qvalues(self, states):
ds = self.probnet(states)
return tf.squeeze(tf.matmul(ds, self.supp))
@tf.function
def t_probvalues(self, states):
return self.ampnet(states)
@tf.function
def t_qvalues(self, states):
ds = self.ampnet(states)
return tf.squeeze(tf.matmul(ds, self.supp))
@tf.function
def s_probvalues(self, states):
return self.selfnet(states)
@tf.function
def s_qvalues(self, states):
ds = self.selfnet(states)
return tf.squeeze(tf.matmul(ds, self.supp))
# @tf.function
def update_target(self, wlst):
self.ampnet.update(wlst)
@tf.function
def train(self, states, actions, drews, gexps, endstates, dones):
with tf.GradientTape() as tape:
batch_size = tf.shape(states)[0]
brange = tf.range(0, batch_size)
indices = tf.stack([brange, actions], axis=1)
chosen_dists = tf.gather_nd(self.probvalues(states), indices)
end_actions = tf.cast(tf.argmax(self.t_qvalues(endstates), axis=1),
dtype=tf.int32)
indices = tf.stack([brange, end_actions], axis=1)
chosen_end_dists = tf.gather_nd(self.t_probvalues(endstates),
indices)
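# Categorical (C51-style) projection: the Bellman-updated atoms drews + gexps * z_j
# (zeroed beyond terminal transitions) are clipped to [vmin, vmax], mapped to fractional
# indices b on the fixed support, and each target probability is split between the
# neighbouring atoms floor(b) and ceil(b) in proportion to its distance from them.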
dmask = (1.0 - dones) * gexps
Tzs = tf.clip_by_value(drews + dmask * self.supp, self.vmin,
self.vmax)
Tzs = tf.transpose(Tzs)
bs = (Tzs - self.vmin) / self.dz
ls = tf.cast(tf.floor(bs), tf.int32)
us = tf.cast(tf.math.ceil(bs), tf.int32)
condl = tf.cast(
tf.cast((us > 0), tf.float32) * tf.cast(
(us == ls), tf.float32), tf.bool)
condu = tf.cast(
tf.cast((ls < self.supportsize - 1), tf.float32) * tf.cast(
(us == ls), tf.float32), tf.bool)
ls = tf.where(condl, ls - 1, ls)
us = tf.where(condu, us + 1, us)
luprob = (tf.cast(us, tf.float32) - bs) * chosen_end_dists
lshot = tf.one_hot(ls, self.supportsize)
ml = tf.einsum('aj,ajk->ak', luprob, lshot)
ulprob = (bs - tf.cast(ls, tf.float32)) * chosen_end_dists
ushot = tf.one_hot(us, self.supportsize)
mu = tf.einsum('aj,ajk->ak', ulprob, ushot)
target = ml + mu
losses = self.losses(target, chosen_dists)
# Kullback–Leibler divergence
loss = self.kldloss(tf.stop_gradient(target), chosen_dists)
gradients = tape.gradient(loss, self.probnet.trainable_variables)
gradients, _ = tf.clip_by_global_norm(gradients, 10.0)
self.optimizer.apply_gradients(
zip(gradients, self.probnet.trainable_variables))
return losses
def save(self, filestr):
self.probnet.save_weights(filestr)
def load(self, filestr):
self.probnet.load_weights(filestr)
|
import java.util.ArrayList;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Scanner;
public class thirdProg {
static PriorityQueue<pair> Q = new PriorityQueue<pair>(10, new pairComp());
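// DFS records each node's parent, depth (dist) and number of children, and pushes every
// leaf onto the max-heap Q keyed by its depth, so the deepest leaves are extracted first.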
static void DFS(int i, int n, ArrayList<Integer>[] adjList, int parent[], int dist[], int children[]) {
for (int j = 0; j < adjList[i].size(); j++) {
int child = adjList[i].get(j);
if (child == parent[i])
continue;
parent[child] = i;
children[i]++;
dist[child] = dist[i] + 1;
DFS(child, n, adjList, parent, dist, children);
}
if (children[i] == 0) {
Q.add(new pair(dist[i], i));
}
}
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt(), k = sc.nextInt();
ArrayList[] adjList = new ArrayList[n];
for (int i = 0; i < n; i++) {
adjList[i] = new ArrayList<Integer>();
}
for (int i = 0; i < n - 1; i++) {
int u = sc.nextInt() - 1;
int v = sc.nextInt() - 1;
adjList[u].add(v);
adjList[v].add(u);
}
int parent[] = new int[n];
int dist[] = new int[n];
int subtree[] = new int[n];
int children[] = new int[n];
dist[0] = 0;
DFS(0, n, adjList, parent, dist, children);
long ans = 0;
while (k != 0 && !Q.isEmpty()) {
pair selected = Q.poll();
int parSelected = parent[selected.node];
ans += selected.val;
k--;
subtree[parSelected] += subtree[selected.node] + 1;
children[parSelected]--;
if (children[parSelected] == 0) {
Q.add(new pair(dist[parSelected] - subtree[parSelected], parSelected));
}
}
System.out.println(ans);
}
}
class pair {
int val, node;
pair(int d, int n) {
val = d;
node = n;
}
}
class pairComp implements Comparator<pair> {
@Override
public int compare(pair o1, pair o2) {
return o2.val - o1.val;
}
} |
Esther Mbulakubuza Mbayo
Background and education
She was born in present-day Luuka District, in Busoga sub-region, in the Eastern Region of Uganda, on 27 April 1971. She studied at Wanyange Girls' School for both her O-Level and A-Level education. She attended Makerere University, graduating in 2005, with a Bachelor of Commerce, with specialization in accounting. She also holds a certificate awarded by the Institute of Chartered Secretaries and Administrators.
Career
In 1997, she served as an internal auditor for Transocean Uganda Limited. From November 1999 until 2002, she worked as an accounts assistant at Lonrho Motors Uganda Limited, a private automobile dealership in Kampala. From January 2003 until February 2006, she worked as an accountant at Lonrho Motors. She then went to work at Commercial Firms Uganda Limited as an accountant, from April 2006 until August 2007. Concomitantly, from September 2001 until June 2008, she worked as an accountant at Socket Works Uganda Limited. From February 2008 until December 2010, she worked as a financial controller at Cooper Motor Corporation Uganda. She was elected as the Luuka District Woman member of parliament at the 2016 general election, defeating incumbent Evelyn Kaabule. She had earlier defeated Kaabule in the NRM primary. On 6 June 2016, she was appointed Cabinet Minister of the Presidency.
Other responsibilities
She concurrently serves as the Chairperson of the district Women's League in the ruling National Resistance Movement political party and as Secretary of Busoga Women Leader’s Association. She is married to George Mbayo and they are parents of three children. |
/// Creates a new UDP connection from a UDP socket and the peer's address.
pub fn new(socket: UdpSocket, peer_addr: SocketAddr) -> Self {
Self {
socket: Arc::new(socket),
peer_addr,
}
} |
// Copyright 2016 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CC_TREES_SWAP_PROMISE_MANAGER_H_
#define CC_TREES_SWAP_PROMISE_MANAGER_H_
#include <set>
#include <vector>
#include "base/macros.h"
#include "cc/base/cc_export.h"
#include "cc/output/swap_promise.h"
namespace cc {
class SwapPromise;
class SwapPromiseMonitor;
class CC_EXPORT SwapPromiseManager {
public:
SwapPromiseManager();
~SwapPromiseManager();
// Call this function when you expect there to be a swap buffer.
// See swap_promise.h for how to use SwapPromise.
void QueueSwapPromise(std::unique_ptr<SwapPromise> swap_promise);
// When a SwapPromiseMonitor is created on the main thread, it calls
// InsertSwapPromiseMonitor() to register itself with LayerTreeHost.
// When the monitor is destroyed, it calls RemoveSwapPromiseMonitor()
// to unregister itself.
void InsertSwapPromiseMonitor(SwapPromiseMonitor* monitor);
void RemoveSwapPromiseMonitor(SwapPromiseMonitor* monitor);
// Called when a commit request is made on the LayerTreeHost.
void NotifySwapPromiseMonitorsOfSetNeedsCommit();
// Called before the commit of the main thread state will be started.
void WillCommit();
// The current swap promise list is moved to the caller.
std::vector<std::unique_ptr<SwapPromise>> TakeSwapPromises();
// Breaks the currently queued swap promises with the specified reason.
void BreakSwapPromises(SwapPromise::DidNotSwapReason reason);
size_t num_queued_swap_promises() const { return swap_promise_list_.size(); }
private:
std::vector<std::unique_ptr<SwapPromise>> swap_promise_list_;
std::set<SwapPromiseMonitor*> swap_promise_monitors_;
DISALLOW_COPY_AND_ASSIGN(SwapPromiseManager);
};
} // namespace cc
#endif // CC_TREES_SWAP_PROMISE_MANAGER_H_
|
/*
Copyright 2021 The Knative Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package upgrade_test
import (
"testing"
"github.com/stretchr/testify/assert"
"knative.dev/eventing-kafka/test/upgrade"
)
func TestSuite(t *testing.T) {
s := upgrade.Suite()
assert.NotEmpty(t, s.Tests.Continual)
assert.NotEmpty(t, s.Tests.PostDowngrade)
assert.NotEmpty(t, s.Tests.PreUpgrade)
assert.NotEmpty(t, s.Tests.PostUpgrade)
assert.NotEmpty(t, s.Installations.Base)
assert.NotEmpty(t, s.Installations.UpgradeWith)
assert.NotEmpty(t, s.Installations.DowngradeWith)
}
|
// src/pages/__test__/LoginPage.test.tsx
import { render, screen } from '@testing-library/react';
import LoginPage from '../LoginPage';
import { BrowserRouter } from 'react-router-dom';
import AppContextProvider from '../../context/AppContext';
import userEvent from '@testing-library/user-event';
const MockLoginPage = () => {
return (
<AppContextProvider>
<BrowserRouter>
<LoginPage />
</BrowserRouter>
</AppContextProvider>
);
};
describe('LoginPage', () => {
it('Renders without crashing', () => {
render(<MockLoginPage />);
});
it('Renders a heading with the text Sign in', () => {
render(<MockLoginPage />);
const headingElement = screen.getByRole('heading');
expect(headingElement).toHaveTextContent('Sign in');
});
it('Renders a paragraph with the text Sign in and start shopping!', () => {
render(<MockLoginPage />);
const paragraphElement = screen.getByText('Sign in and start shopping!');
expect(paragraphElement).toBeInTheDocument();
});
it('Renders a form', () => {
render(<MockLoginPage />);
const formElement = screen.getByTitle('login-form');
expect(formElement).toBeInTheDocument();
});
it('Renders an input field with the placeholder text (Username)', () => {
render(<MockLoginPage />);
const inputElement = screen.getByPlaceholderText('Username');
expect(inputElement).toBeInTheDocument();
});
it('Renders an input field with the placeholder text (Password)', () => {
render(<MockLoginPage />);
const inputElement = screen.getByPlaceholderText('Password');
expect(inputElement).toBeInTheDocument();
});
it('Renders a button with the text (Login)', () => {
render(<MockLoginPage />);
const buttonElement = screen.getByRole('button');
expect(buttonElement).toBeInTheDocument();
});
it('The input value equals the typed user input', () => {
render(<MockLoginPage />);
const inputUserElement = screen.getByPlaceholderText('Username');
userEvent.type(inputUserElement, 'User123');
expect(inputUserElement).toHaveValue('User123');
});
});
|
# Decorators
import time
def timer(func):
def deco(*args, **kwargs):
func(*args, **kwargs)
print("执行完毕")
return deco
@timer
def test1():
print("Hi test1")
test1()
def login(type):
def outerwrapper(func):
def wrapper(*args, **kwargs):
func(*args, **kwargs)
print("获取到参数:"+type)
for value in args:
print(value)
for key, value in kwargs.items():
print(key + " " + value)
return wrapper
return outerwrapper
@login("internet")
def loginChaeck(name, pwd):
print("loginChaeck")
loginChaeck("abc", "<PASSWORD>")
# List comprehensions
a = [i * 2 for i in range(10)]
print(a)
def func(i):
return i * 3
b = [func(i) for i in range(10)]
print(b)
# Generators: the data is only produced when it is requested
# Only the current position is tracked
# There is only a __next__() method
c = (func(i) for i in range(10))
print(c)
valueC = c.__next__()
print(valueC)
valueC = c.__next__()
print(valueC)
def fib(max):
n, aValue, bVable = 0, 0, 1
# Equivalent to:
# t = (0, 0, 1)
# n = t[0] aValue = t[1] bVable = t[2]
while n < max:
print(bVable)
aValue, bVable = bVable, aValue + bVable
n = n + 1
fib(10)
# Turn it into a generator
def fib1(max):
n, aValue, bVable = 0, 0, 1
# Equivalent to:
# t = (0, 0, 1)
# n = t[0] aValue = t[1] bVable = t[2]
while n < max:
yield bVable
aValue, bVable = bVable, aValue + bVable
n = n + 1
# The return value is delivered when StopIteration is raised
return "---done---"
d = fib1(10)
print(d)
valueD = d.__next__()
print(valueD)
valueD = d.__next__()
print(valueD)
while True:
try:
d.__next__()
except StopIteration as e:
print(e.value)
break
def consumer(name):
print("%s 开始吃包子了" % name)
while True:
baozi = yield
print("包子%s来了,被%s吃了" % (baozi, name))
cs = consumer("PQ")
cs.__next__()
# send() passes a value into the generator and resumes it at the yield expression
cs.send("small baozi")
# iter() turns an Iterable such as a list, dict or str into an Iterator
e = iter({1, 2, 3, 4})
eValue = e.__next__()
print(eValue)
# Built-in functions
f = [1, 0, -1]
# True only if all elements are truthy
print(all(f))
# True if any element is truthy
print(any(f))
# Convert decimal to binary
print(bin(10))
# Check whether a value is truthy
print(bool(1))
print(bool(0))
print(bool(-1))
# Byte arrays
b1 = bytes("abcde", encoding="utf-8")
print(b1)
b2 = bytearray("abcde", encoding="utf-8")
b2[0] = 50
print(b2)
# Return the character for a given ASCII code
print(chr(100))
# Return the ASCII code for a given character
print(ord('b'))
# exec runs a compiled code string
code = """
for i in range(10):
print(i)
"""
exec(compile(code, "", "exec"))
code1 = """
1+2/1*6
"""
com = compile(code1, "", "eval")
print(com)
eval(compile(code1, "", "eval"))
# Show which methods an object provides
print(dir([]))
# Convert a string into a list, set, tuple or dict
gValue = "[1,2,3,4,5]"
print(type(eval(gValue)))
gValue = "{1,2,3,4,5}"
print(type(eval(gValue)))
gValue = "(1,2,3,4,5)"
print(type(eval(gValue)))
gValue = "{'key1':'value1'}"
print(type(eval(gValue)))
# Anonymous (lambda) functions can only hold a simple expression, such as a ternary
cal = lambda a, b: a if a > b else b
print(cal(1, 2))
# Filtering
filt = filter(lambda n: n > 5, range(10))
print(filt)
for i in filt:
print(i)
# Transformation
ma = map(lambda n: n * n, range(10))
print(ma)
for i in ma:
print(i)
# JSON serialization and deserialization
import json
infoValue = {"key1": "value1", "key2": "value2"}
# f1 = open("day4.txt", "w")
# # f1.write(str(infoValue))
# f1.write(json.dumps(infoValue))
# f1.close()
# f2 = open("day4.txt", "r")
# print(eval(f2.read()))
# print(json.loads(f2.read()))
# f2.close()
# pickle serializes and deserializes complex objects
import pickle
def pickletest():
print("Hello")
infoValue2 = {"key1": "value1", "key2": "value2", "func": pickletest}
# f1 = open("day4.txt", "wb")
# # f1.write(pickle.dumps(infoValue2))
# pickle.dump(infoValue2, f1)
# f1.close()
f2 = open("day4.txt", "rb")
# data = pickle.loads(f2.read())
data = pickle.load(f2)
data["func"]()
f2.close()
# Project path
import os
print(os.path.dirname(os.path.abspath(__file__)))
|
// jpa/eclipselink.jpa.test/src/org/eclipse/persistence/testing/models/jpa21/advanced/LargeProject.java
/*******************************************************************************
* Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0
* which accompanies this distribution.
* The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html
* and the Eclipse Distribution License is available at
* http://www.eclipse.org/org/documents/edl-v10.php.
*
* Contributors:
* 02/08/2012-2.4 <NAME>
* - 350487: JPA 2.1 Specification defined support for Stored Procedure Calls
* 01/23/2013-2.5 <NAME>
* - 350487: JPA 2.1 Specification defined support for Stored Procedure Calls
******************************************************************************/
package org.eclipse.persistence.testing.models.jpa21.advanced;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;
@Entity
@Table(name="JPA21_LPROJECT")
@DiscriminatorValue("L")
public class LargeProject extends Project {
private double m_budget;
protected Employee executive;
public LargeProject() {
super();
}
public LargeProject(String name) {
this();
this.setName(name);
}
public double getBudget() {
return m_budget;
}
public void setBudget(double budget) {
this.m_budget = budget;
}
/**
* @return the executive
*/
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="EXEC_ID")
public Employee getExecutive() {
return executive;
}
/**
* @param executive the executive to set
*/
public void setExecutive(Employee executive) {
this.executive = executive;
}
}
|
/**
* Create and use quadratic features <p>
*
* @param firstNameSpace namespace or ":" for any
* @param secondNamespace namespace or ":" for any
* @return builder
*/
@Override
public Builder quadratic(final String firstNameSpace, final String secondNamespace) {
addParameter("--quadratic", firstNameSpace.charAt(0) + "" + secondNamespace.charAt(0));
return this;
} |
package reactor
import (
"fmt"
"os"
"regexp"
"time"
)
const (
DATE_FORMAT = "2006-01-02 15:04:05.000"
KEY_PATTERN = <KEY>"
VALUE_MAX = 2048
)
var keyPattern, _ = regexp.Compile(KEY_PATTERN)
func mergeProperties(target, source map[string]interface{}) {
for k, v := range source {
target[k] = v
}
}
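// extractTime pops the reserved "#time" property from the event, returning it formatted
// with DATE_FORMAT and falling back to the current time when the key is absent or is not a time.Time.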
func extractTime(p map[string]interface{}) string {
if t, ok := p["#time"]; ok {
delete(p, "#time")
v, ok := t.(time.Time)
if !ok {
fmt.Fprintln(os.Stderr, "Invalid data type for #time")
return time.Now().Format(DATE_FORMAT)
}
return v.Format(DATE_FORMAT)
}
return time.Now().Format(DATE_FORMAT)
}
func extractUUID(p map[string]interface{}) string {
if t, ok := p["#uuid"]; ok {
delete(p, "#uuid")
v, ok := t.(string)
if !ok {
fmt.Fprintln(os.Stderr, "Invalid data type for #uuid")
}
return v
}
return ""
}
func extractIp(p map[string]interface{}) string {
if t, ok := p["#ip"]; ok {
delete(p, "#ip")
v, ok := t.(string)
if !ok {
fmt.Fprintln(os.Stderr, "Invalid data type for #ip")
return ""
}
return v
}
return ""
}
func isNotNumber(v interface{}) bool {
switch v.(type) {
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
case float32, float64:
default:
return true
}
return false
}
func checkPattern(name []byte) bool {
return keyPattern.Match(name)
}
|
def compute(self, A1, A2):
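# Assuming InverseMatrix returns a scipy sparse matrix: this computes the Euclidean
# distance between the element-wise square roots of the two inverses, converts it to a
# similarity 1 / (1 + d), and returns the corresponding dissimilarity 1 - sim.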
S1 = self.InverseMatrix(A1)
S2 = self.InverseMatrix(A2)
d = 0
for i in range(A1.shape[0]):
for j in range(A1.shape[0]):
d += (sqrt(S1.tocsr()[(i, j)]) - sqrt(S2.tocsr()[(i, j)])) ** 2
d = sqrt(d)
sim = 1 / (1 + d)
return 1 - sim |
// THIS FILE IS AUTO-GENERATED
use crate::{
characteristic::{brightness, name, programmable_switch_event, volume, HapCharacteristic},
service::{HapService, Service},
HapType,
};
/// Doorbell Service.
pub type Doorbell = Service<DoorbellInner>;
impl Default for Doorbell {
fn default() -> Doorbell { new() }
}
/// Inner type of the Doorbell Service.
#[derive(Default)]
pub struct DoorbellInner {
/// ID of the Doorbell Service.
id: u64,
/// `HapType` of the Doorbell Service.
hap_type: HapType,
/// Specifies if the Service is hidden.
hidden: bool,
/// Specifies if the Service is the primary Service of the Accessory.
primary: bool,
/// Programmable Switch Event Characteristic.
pub programmable_switch_event: programmable_switch_event::ProgrammableSwitchEvent,
/// Brightness Characteristic.
pub brightness: Option<brightness::Brightness>,
/// Volume Characteristic.
pub volume: Option<volume::Volume>,
/// Name Characteristic.
pub name: Option<name::Name>,
}
impl HapService for DoorbellInner {
fn get_id(&self) -> u64 { self.id }
fn set_id(&mut self, id: u64) { self.id = id; }
fn get_type(&self) -> HapType { self.hap_type }
fn get_hidden(&self) -> bool { self.hidden }
fn set_hidden(&mut self, hidden: bool) { self.hidden = hidden; }
fn get_primary(&self) -> bool { self.primary }
fn set_primary(&mut self, primary: bool) { self.primary = primary; }
fn get_characteristics(&self) -> Vec<&dyn HapCharacteristic> {
let mut characteristics: Vec<&dyn HapCharacteristic> = vec![&self.programmable_switch_event];
if let Some(c) = &self.brightness {
characteristics.push(c);
}
if let Some(c) = &self.volume {
characteristics.push(c);
}
if let Some(c) = &self.name {
characteristics.push(c);
}
characteristics
}
fn get_mut_characteristics(&mut self) -> Vec<&mut dyn HapCharacteristic> {
let mut characteristics: Vec<&mut dyn HapCharacteristic> = vec![&mut self.programmable_switch_event];
if let Some(c) = &mut self.brightness {
characteristics.push(c);
}
if let Some(c) = &mut self.volume {
characteristics.push(c);
}
if let Some(c) = &mut self.name {
characteristics.push(c);
}
characteristics
}
}
/// Creates a new Doorbell Service.
pub fn new() -> Doorbell {
Doorbell::new(DoorbellInner {
hap_type: HapType::Doorbell,
programmable_switch_event: programmable_switch_event::new(),
..Default::default()
})
}
|
# src/models/ensembles.py
from typing import Dict
from typing import List
from typing import Optional
from typing import Sized
from typing import Tuple
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.preprocessing import LabelEncoder
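# PrefittedVotingClassifier combines estimators that are already fitted: fit() only learns
# the label encoding (and, when strict, checks that every member reports the same classes_),
# transform() returns each member's weighted class probabilities, and predict_proba() sums
# them into a soft vote.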
class PrefittedVotingClassifier(BaseEstimator):
def __init__(
self,
estimators: List[Tuple[str, BaseEstimator]],
voting: str = "soft",
weights: Optional[Sized] = None,
verbose: bool = False,
strict: bool = True,
):
assert weights is None or len(weights) == len(estimators)
self.estimators = estimators
self.voting = voting
self.weights = weights
self.verbose = verbose
self.strict = strict
self.le_ = None
self.classes_ = None
def transform(self, X):
weights = self.weights
if weights is None:
weights = np.ones(len(self.estimators)) / len(self.estimators)
return [est.predict_proba(X) * ww for ww, (_, est) in zip(weights, self.estimators)]
def predict_proba(self, X):
return sum(self.transform(X))
def predict(self, X):
probs = self.predict_proba(X)
inds = np.argmax(probs, axis=1)
return self.classes_[inds]
def fit(self, X, y, sample_weight=None):
self.le_ = LabelEncoder().fit(y)
self.classes_ = self.le_.classes_
for name, est in self.estimators:
if self.strict:
assert np.all(
est.classes_ == self.classes_
), f"Model classes ({self.classes_}) not aligned with {name}: {est.classes_=}"
return self
def score(self, X, y):
return np.mean(self.predict(X) == y)
class ZeroShotVotingClassifier(PrefittedVotingClassifier):
def __init__(
self,
estimators: List[Tuple[str, BaseEstimator]],
label_alignment: Dict[str, str],
voting: str = "soft",
weights: Optional[Sized] = None,
verbose: bool = False,
):
super().__init__(estimators=estimators, voting=voting, weights=weights, verbose=verbose, strict=False)
self.label_alignment = label_alignment
def predict_proba(self, X):
out = np.zeros((X.shape[0], self.classes_.shape[0]))
self_lookup = dict(zip(self.classes_, range(len(self.classes_))))
for (_, estimator), transformed in zip(self.estimators, self.transform(X)):
for fi, (name, col) in enumerate(zip(estimator.classes_, transformed.T)):
out[:, self_lookup[self.label_alignment[name]]] += col
return out
def predict(self, X):
probs = self.predict_proba(X)
inds = np.argmax(probs, axis=1)
return self.classes_[inds]
|
The California Supreme Court on Thursday unanimously overturned the first-degree murder conviction of a man who stole appliances and caused a fatal accident an hour later when a stove fell off his truck.
Cole Allen Wilkins of Long Beach was convicted under the "felony-murder rule," which says a defendant may be convicted of first-degree murder if someone dies while the suspect is committing a felony, such as a burglary or rape. Intention to kill is not required for conviction.
Relying on that rule, an Orange County jury convicted Wilkins in 2008 of first-degree murder because he stole appliances, a felony, and caused the death of Los Angeles County Sheriff's Deputy David Piquette when a stove fell onto the road.
Piquette, who was driving to work from his home in Corona, was killed when he swerved to avoid the stove on the 91 Freeway in Anaheim and was crushed by a cement truck.
The judge had instructed the jury that Wilkins, then 32, could be found guilty of murder if the fatal accident and the burglary were part of a “continuous transaction.” The jury convicted, and Wilkins was sentenced to 25 years to life.
The state high court overturned Wilkins’ conviction on the grounds the jury had not been instructed properly. If a perpetrator of a felony has already escaped and reached a “temporary place of safety,” any death he then causes is not felony murder, the court said.
“The prosecution did not dispute that at the time of the accident the burglary had not yet been discovered, and defendant was at least 60 miles and one hour from the crime scene, had made a telephone call a half-hour earlier, and had been driving at a normal speed,” Chief Justice Tani Cantil-Sakauye wrote for a unanimous court.
“Given the evidence, there is a reasonable probability that a jury properly instructed … would have concluded that defendant had reached a place of temporary safety before the fatal act occurred and was not guilty of felony murder.”
Orange County prosecutors will now have to decide whether to retry Wilkins.
Deputy Atty. Gen. Steven T. Oetting complained the court had created a “new rule” that would reduce criminals’ culpability. He said Wilkins did not tie down the appliances after burglarizing a home under construction because he wanted to get away from the crime scene as fast as possible.
“He is not just some guy who just failed to secure his load,” Oetting said. “The reason he failed to secure his load is because of the burglary, and this ruling fails to take this into account.”
Richard A. Levy, who represented Wilkins on appeal, called the ruling a clarification of existing law and “absolutely the correct decision.”
-- Maura Dolan in San Francisco
Photo: Cole Allen Wilkins. Credit: Orange County district attorney's office |
/**
* Save user-assigned WOCE (data QC) flags.
*/
public class SaveEdits extends LASAction {
private static final long serialVersionUID = -2069025251560349247L;
private static Logger log = LoggerFactory.getLogger(SaveEdits.class.getName());
private static final String DATABASE_CONFIG = "DatabaseBackendConfig.xml";
// TODO: get the database name from the <database_access><db_name> field for this data collection
private static final String DATABASE_NAME = "SOCATFlags";
private static String ERROR = "error";
private static String EDITS = "edits";
private String socatQCVersion;
private DsgNcFileHandler dsgHandler;
private DatabaseRequestHandler databaseHandler;
private TreeSet<DashDataType<?>> dataTypesSet;
/**
* Creates with the SOCAT UploadDashboard DsgNcFileHandler and DatabaseRequestHandler
*
* @throws IllegalArgumentException
* if parameters are invalid
* @throws SQLException
* if one is thrown connecting to the database
* @throws LASException
* if unable to get the database parameters
*/
public SaveEdits() throws IllegalArgumentException, SQLException, LASException {
super();
log.debug("Initializing SaveEdits from database configuraton");
Element dbParams;
try {
LASDocument dbConfig = new LASDocument();
TemplateTool tempTool = new TemplateTool("database", DATABASE_CONFIG);
JDOMUtils.XML2JDOM(tempTool.getConfigFile(), dbConfig);
dbParams = dbConfig.getElementByXPath(
"/databases/database[@name='" + DATABASE_NAME + "']");
} catch ( Exception ex ) {
throw new LASException(
"Could not parse " + DATABASE_CONFIG + ": " + ex.toString());
}
if ( dbParams == null )
throw new LASException("No database definition found for database " +
DATABASE_NAME + " in " + DATABASE_CONFIG);
String databaseDriver = dbParams.getAttributeValue("driver");
log.debug("driver=" + databaseDriver);
String databaseUrl = dbParams.getAttributeValue("connectionURL");
log.debug("databaseUrl=" + databaseUrl);
String selectUsername = dbParams.getAttributeValue("user");
log.debug("selectUsername=" + selectUsername);
String selectPassword = dbParams.getAttributeValue("password");
// Logging this sets off security alarm bells... log.debug("selectPassword=" + selectPassword);
String updateUsername = dbParams.getAttributeValue("updateUser");
log.debug("updateUsername=" + updateUsername);
String updatePassword = dbParams.getAttributeValue("updatePassword");
// Logging this sets off security alarm bells... log.debug("updatePassword=" + updatePassword);
if ( (updateUsername != null) && (updatePassword != null) ) {
// The database URLs in the LAS config files do not have the jdbc: prefix
databaseHandler = new DatabaseRequestHandler(databaseDriver, "jdbc:" + databaseUrl,
selectUsername, selectPassword, updateUsername, updatePassword);
log.debug("database request handler configuration successful");
}
else {
databaseHandler = null;
log.debug("database request handler not created");
}
socatQCVersion = dbParams.getAttributeValue("socatQCVersion");
log.debug("socatQCVersion=" + socatQCVersion);
String dsgFileDir = dbParams.getAttributeValue("dsgFileDir");
log.debug("dsgFileDir=" + dsgFileDir);
String decDsgFileDir = dbParams.getAttributeValue("decDsgFileDir");
log.debug("decDsgFileDir=" + decDsgFileDir);
String erddapDsgFlag = dbParams.getAttributeValue("erddapDsgFlag");
log.debug("erddapDsgFlag=" + erddapDsgFlag);
String erddapDecDsgFlag = dbParams.getAttributeValue("erddapDecDsgFlag");
log.debug("erddapDecDsgFlag=" + erddapDecDsgFlag);
String dataTypesFilename = dbParams.getAttributeValue("addDataTypes");
log.debug("addDataTypes=" + dataTypesFilename);
String ferretConfigFilename = dbParams.getAttributeValue("ferretConfig");
log.debug("ferretConfig=" + ferretConfigFilename);
if ( (dsgFileDir != null) && (decDsgFileDir != null) &&
(erddapDsgFlag != null) && (erddapDecDsgFlag != null) &&
(ferretConfigFilename != null) ) {
// Actual metadata and data types not needed for just assigning WOCE flags, but
// data types needed for converting upper-cased names to actual variable names.
// FerretConfig needed for decimating the full-data DSG file after assigning WOCE flags
KnownDataTypes dataTypes = new KnownDataTypes();
dataTypes.addStandardTypesForDataFiles();
try {
Properties typeProps = new Properties();
FileInputStream input = new FileInputStream(dataTypesFilename);
try {
typeProps.load(input);
} finally {
input.close();
}
dataTypes.addTypesFromProperties(typeProps, DashDataType.Role.FILE_DATA, null);
} catch ( Exception ex ) {
log.debug("adding data types possible problem: " + ex.getMessage());
}
log.debug("dataTypes=" + dataTypes.toString());
dataTypesSet = dataTypes.getKnownTypesSet();
// Ferret configuration
FerretConfig ferretConf;
try {
InputStream stream = new FileInputStream(ferretConfigFilename);
try {
SAXBuilder sb = new SAXBuilder();
Document jdom = sb.build(stream);
ferretConf = new FerretConfig();
ferretConf.setRootElement(jdom.getRootElement().clone());
} finally {
stream.close();
}
} catch ( Exception ex ) {
throw new IllegalArgumentException("ferret configuration problem: " + ex.getMessage(), ex);
}
log.debug("ferretConfig=" + ferretConf.toString());
dsgHandler = new DsgNcFileHandler(dsgFileDir, decDsgFileDir, erddapDsgFlag, erddapDecDsgFlag,
ferretConf, null, dataTypes, null, null);
log.debug("DSG file handler configuration successful");
}
else {
dataTypesSet = null;
dsgHandler = null;
log.debug("DSG file handler not created");
}
}
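/*
 * For reference, a sketch of the JSON request body that execute() below appears
 * to expect, with field names taken from the parsing code that follows; the
 * concrete values (file path, expocode, variable names) are hypothetical.  The
 * "comment" value is the hexadecimal encoding of the UTF-16 bytes of the comment
 * text, and each entry in "edits" describes one data point using the upper-cased
 * Ferret column names:
 *
 *   {
 *     "temp_file": "/path/to/las_temp_dsg.nc",
 *     "comment": "feff...",
 *     "edits": [
 *       { "EXPOCODE": "EXPO20100523",
 *         "DATE": "23-MAY-2010 12:34:56",
 *         "LONGITUDE": "207.75",
 *         "LATITUDE": "23.57",
 *         "FCO2_RECOMMENDED": "385.2",
 *         "WOCE_CO2_WATER": "3" }
 *     ]
 *   }
 */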
@Override
public String execute() throws Exception {
// Make sure this is configured for setting WOCE flags
if ( (socatQCVersion == null) || (dsgHandler == null) || (databaseHandler == null) ) {
logerror(request, "LAS not configured to allow editing of WOCE flags", "Illegal action");
return ERROR;
}
// Parser to convert Ferret date strings into Date objects
SimpleDateFormat fullDateParser = new SimpleDateFormat("dd-MMM-yyyy HH:mm:ss");
fullDateParser.setTimeZone(TimeZone.getTimeZone("UTC"));
// Get the username of the reviewer assigning these WOCE flags
String username;
try {
log.debug("Assigning SaveEdits username");
username = request.getUserPrincipal().getName();
} catch ( Exception ex ) {
logerror(request, "Unable to get the username for WOCE flagging", ex);
return ERROR;
}
JsonStreamParser parser = new JsonStreamParser(request.getReader());
JsonObject message = (JsonObject) parser.next();
// LAS temporary DSG file to update
String tempname;
try {
tempname = message.get("temp_file").getAsString();
} catch ( Exception ex ) {
logerror(request, "Unable to get temp_file for WOCE flagging", ex);
return ERROR;
}
// WOCE flag comment
String comment;
try {
String encodedComment = message.get("comment").getAsString();
comment = new String(DatatypeConverter.parseHexBinary(encodedComment), "UTF-16");
} catch ( Exception ex ) {
logerror(request, "Unable to get the comment for WOCE flagging", ex);
return ERROR;
}
// List of data points getting the WOCE flag
JsonArray edits;
try {
edits = (JsonArray) message.get("edits");
if ( edits.size() < 1 )
throw new IllegalArgumentException("No edits given");
} catch ( Exception ex ) {
logerror(request, "Unable to get the edits for WOCE flagging", ex);
return ERROR;
}
// Create the list of (incomplete) data locations for the WOCE event
String expocode = null;
String woceFlag = null;
String woceName = null;
String dataName = null;
ArrayList<DataLocation> locations = new ArrayList<DataLocation>(edits.size());
try {
for (JsonElement rowValues : edits) {
DataLocation datumLoc = new DataLocation();
for (Entry<String,JsonElement> rowEntry : ((JsonObject) rowValues).entrySet()) {
// Neither the name nor the value should be null.
// Because of going through Ferret, everything will be uppercase
// but just to be sure....
String name = rowEntry.getKey().trim().toUpperCase(Locale.ENGLISH);
String value = rowEntry.getValue().getAsString().trim().toUpperCase(Locale.ENGLISH);
if ( name.equals("EXPOCODE") || name.equals("EXPOCODE_") ) {
if ( expocode == null )
expocode = value;
else if ( !expocode.equals(value) )
throw new IllegalArgumentException("Mismatch of expocodes; " +
"previous: '" + expocode + "'; current: '" + value + "'");
}
else if ( name.equals("DATE") ) {
Date dataDate = fullDateParser.parse(value);
datumLoc.setDataDate(dataDate);
}
else if ( name.equals("LONGITUDE") ) {
Double longitude = Double.parseDouble(value);
datumLoc.setLongitude(longitude);
}
else if ( name.equals("LATITUDE") ) {
Double latitude = Double.parseDouble(value);
datumLoc.setLatitude(latitude);
}
else if ( name.startsWith("WOCE_") ) {
// Name and value of the WOCE flag to assign
if ( woceName == null )
woceName = name;
else if ( !woceName.equals(name) )
throw new IllegalArgumentException("Mismatch of WOCE names; " +
"previous: '" + woceName + "'; current: '" + name + "'");
if ( value.length() != 1 )
throw new IllegalArgumentException("Invalid WOCE flag value '" + value + "'");
if ( woceFlag == null )
woceFlag = value;
else if ( !woceFlag.equals(value) )
throw new IllegalArgumentException("Mismatch of WOCE flags; " +
"previous: '" + woceFlag + "'; current: '" + value + "'");
}
else {
// Assume it is the data variable name.
// Note that WOCE from just lat/lon/date plots will not have this column
if ( dataName == null )
dataName = name;
else if ( !dataName.equals(name) )
throw new IllegalArgumentException("Mismatch of data names; " +
"previous: '" + dataName + "'; current: '" + name + "'");
Double dataValue = Double.parseDouble(value);
datumLoc.setDataValue(dataValue);
}
}
locations.add(datumLoc);
}
} catch ( Exception ex ) {
logerror(request, "Problems interpreting the WOCE flags", ex);
if ( expocode != null )
logerror(request, "expocode = " + expocode, "");
if ( dataName != null )
logerror(request, "dataName = " + dataName, "");
if ( woceName != null )
logerror(request, "woceName = " + woceName, "");
if ( woceFlag != null )
logerror(request, "woceFlag = " + woceFlag, "");
return ERROR;
}
if ( expocode == null ) {
logerror(request, "No EXPOCODE given in the WOCE flags", "");
return ERROR;
}
if ( woceName == null ) {
logerror(request, "No WOCE flag name given in the WOCE flags", "");
return ERROR;
}
else {
String varName = null;
for (DashDataType dtype : dataTypesSet) {
if ( dtype.typeNameEquals(woceName) ) {
varName = dtype.getVarName();
break;
}
}
if ( varName == null ) {
logerror(request, "Unknown WOCE flag name '" + woceName + "'", "");
return ERROR;
}
woceName = varName;
}
if ( woceFlag == null ) {
logerror(request, "No WOCE flag value given in the WOCE flags", "");
return ERROR;
}
if ( dataName != null ) {
// data variable name probably upper-cased, so get actual-cased name
String varName = null;
for (DashDataType dtype : dataTypesSet) {
if ( dtype.typeNameEquals(dataName) ) {
varName = dtype.getVarName();
break;
}
}
if ( varName == null ) {
logerror(request, "Unknown data variable '" + dataName + "'", "");
return ERROR;
}
dataName = varName;
}
// Create the WOCE event without row numbers
DataQCEvent woceEvent = new DataQCEvent();
woceEvent.setVersion(socatQCVersion);
woceEvent.setUsername(username);
woceEvent.setComment(comment);
woceEvent.setDatasetId(expocode);
woceEvent.setFlagName(woceName);
woceEvent.setVarName(dataName);
woceEvent.setFlagValue(woceFlag);
woceEvent.setFlagDate(new Date());
woceEvent.setLocations(locations);
// Update the full-data DSG file with the WOCE flags, filling in the missing data row numbers,
// and regenerate the decimated DSG file from the full-data DSG file.
ArrayList<DataLocation> unidentified;
try {
unidentified = dsgHandler.updateDataQCFlags(woceEvent, true);
log.debug("full-data DSG file updated");
} catch ( Exception ex ) {
logerror(request, "Unable to update the full-data DSG file with the WOCE flags", ex);
logerror(request, "expocode = " + expocode +
"; dataName = " + dataName +
"; woceFlag = " + woceFlag, "");
return ERROR;
}
if ( !unidentified.isEmpty() ) {
logerror(request, "Unable to identify the following data points: ", "");
for (DataLocation loc : unidentified) {
logerror(request, " " + loc.toString(), "");
}
return ERROR;
}
try {
DsgNcFile tempFile = new DsgNcFile(tempname);
// Ignore any unidentified data points - temp file may not be complete
tempFile.updateDataQCFlags(woceEvent, false);
log.debug("temporary DSG file updated");
} catch ( Exception ex ) {
logerror(request, "Unable to update the temporary DSG file with the WOCE flags", ex);
logerror(request, "expocode = " + expocode +
"; dataName = " + dataName +
"; woceFlag = " + woceFlag, "");
return ERROR;
}
// Save the WOCE event with the row numbers to the database
try {
databaseHandler.addDataQCEvent(Collections.singletonList(woceEvent));
log.debug("WOCE event added to the database");
} catch ( Exception ex ) {
logerror(request, "Unable to record the WOCE event in the database", ex);
logerror(request, "expocode = " + expocode +
"; dataName = " + dataName +
"; woceFlag = " + woceFlag, "");
return ERROR;
}
log.info("Assigned WOCE event (also updated " + tempname + "): \n" +
woceEvent.toString());
request.setAttribute("expocode", expocode);
return EDITS;
}
} |
/**
* Progresses all issues matching the JQL search, using the given workflow action. Optionally
* adds a comment to the issue(s) at the same time.
*
* @param jqlSearch the query
* @param workflowActionName the name of the workflow action (transition) to apply
* @param comment the comment
* @param console the console
* @return true if all matching issues were updated successfully, false otherwise
* @throws TimeoutException if the JIRA calls take too long to complete
*/
public boolean progressMatchingIssues(String jqlSearch, String workflowActionName, String comment, PrintStream console) throws TimeoutException {
JiraSession session = getSession();
if (session == null) {
LOGGER.warning("JIRA session could not be established");
console.println(Messages.FailedToConnect());
return false;
}
boolean success = true;
List<Issue> issues = session.getIssuesFromJqlSearch(jqlSearch);
if (isEmpty(workflowActionName)) {
console.println("[JIRA] No workflow action was specified, " +
"thus no status update will be made for any of the matching issues.");
}
for (Issue issue : issues) {
String issueKey = issue.getKey();
if (isNotEmpty(comment)) {
session.addComment(issueKey, comment, null, null);
}
if (isEmpty(workflowActionName)) {
continue;
}
Integer actionId = session.getActionIdForIssue(issueKey, workflowActionName);
if (actionId == null) {
LOGGER.fine(String.format("Invalid workflow action %s for issue %s; issue status = %s",
workflowActionName, issueKey, issue.getStatus()));
console.println(Messages.JiraIssueUpdateBuilder_UnknownWorkflowAction(issueKey, workflowActionName));
success = false;
continue;
}
String newStatus = session.progressWorkflowAction(issueKey, actionId);
console.println(String.format("[JIRA] Issue %s transitioned to \"%s\" due to action \"%s\".",
issueKey, newStatus, workflowActionName));
}
return success;
} |
# fitlins/generate_dset.py
import json
from itertools import product
from tempfile import mkdtemp
from pathlib import Path
import pandas as pd
import numpy as np
from nilearn.glm.first_level.hemodynamic_models import compute_regressor
import nibabel as nib
import bids
from bids.layout.writing import build_path
def write_metadata(filepath, metadata):
filepath.ensure()
with open(str(filepath), 'w') as meta_file:
json.dump(metadata, meta_file)
class RegressorFileCreator():
"""Generator for _regressors files in bids derivatives dataset"""
# pattern for file
PATTERN = (
"sub-{subject}[/ses-{session}]/{datatype<func>|func}/"
"sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}]"
"[_ce-{ceagent}][_dir-{direction}][_rec-{reconstruction}][_run-{run}]"
"[_echo-{echo}][_space-{space}][_cohort-{cohort}][_desc-{desc}]_"
"{suffix<timeseries|regressors>|timeseries}{extension<.json|.tsv>|.tsv}"
)
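    # With the default labels used by DummyDerivatives below, this pattern combined
    # with the FILE_PARAMS below resolves to something like (illustrative only):
    #   sub-bert/ses-breakfast/func/sub-bert_ses-breakfast_task-eating_run-01_desc-confounds_regressors.tsv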
# common file parameters
FILE_PARAMS = {"suffix": "regressors", "datatype": "func", "desc": "confounds"}
def __init__(self, base_dir, fname_params, regr_names, n_tp, metadata=None):
self.base_dir = base_dir
self.metadata = metadata
fname_params = {**fname_params, **self.FILE_PARAMS, "extension": ".tsv"}
meta_params = {**fname_params, **self.FILE_PARAMS, "extension": ".json"}
self.init_data(regr_names, n_tp)
self.create_fname(fname_params, meta_params)
def init_data(self, regr_names, n_tp):
"""create the regressor data"""
self.noise_df = pd.DataFrame(
{name: np.random.random(n_tp) for name in regr_names}
)
def create_fname(self, fname_params, meta_params):
"""create the bids derivatives regressor file path and path names"""
self.fname = self.base_dir / build_path(fname_params, self.PATTERN)
self.meta_fname = self.base_dir / build_path(meta_params, self.PATTERN)
def write_file(self):
"""write the data to files"""
self.fname.dirpath().ensure_dir()
self.noise_df.to_csv(self.fname, sep='\t', index=False)
if self.metadata:
write_metadata(self.meta_fname, self.metadata)
return self.fname, self.meta_fname
return self.fname
class DerivFuncFileCreator():
PATTERN = (
"sub-{subject}[/ses-{session}]/{datatype<func>|func}/"
"sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}]"
"[_ce-{ceagent}][_dir-{direction}][_rec-{reconstruction}][_run-{run}]"
"[_echo-{echo}][_space-{space}][_cohort-{cohort}][_res-{resolution}]"
"[_desc-{desc}]_{suffix<bold|cbv|phase|sbref|boldref|dseg>}"
"{extension<.nii|.nii.gz|.json>|.nii.gz}"
)
FILE_PARAMS = {"suffix": "bold", "space": "T1w", "desc": "preproc"}
def __init__(
self, base_dir, fname_params, events_df, trial_type_weights, noise_df, n_tp, cnr, metadata
):
self.base_dir = base_dir
self.metadata = metadata
fname_params = {**fname_params, **self.FILE_PARAMS, "extension": ".nii.gz"}
meta_params = {**fname_params, **self.FILE_PARAMS, "extension": ".json"}
self.init_data(events_df, trial_type_weights, noise_df, n_tp, cnr, metadata)
self.create_fname(fname_params, meta_params)
def _create_signal(self, tr, n_tp, events_df, trial_type_weights):
frame_times = np.arange(0, int(n_tp * tr), step=int(tr))
signal = np.zeros(frame_times.shape)
trial_types = events_df['trial_type'].unique()
for condition, weight in zip(trial_types, trial_type_weights):
exp_condition = events_df.query(
f"trial_type == '{condition}'"
)[['onset', 'duration']].values.T
exp_condition = np.vstack([exp_condition, np.repeat(weight, exp_condition.shape[1])])
signal += compute_regressor(
exp_condition, "glover", frame_times, con_id=condition)[0].squeeze()
return signal
def _aggregate_noise(self, noise_df):
return noise_df.values.mean(axis=1)
def _create_nii(self, timeseries):
brain_data = np.random.random((9, 9, 9, len(timeseries)))
brain_data[2:6, 2:6, 2:6, :] += timeseries
return nib.Nifti1Image(brain_data, affine=np.eye(4))
def init_data(self, events_df, trial_type_weights, noise_df, n_tp, cnr, metadata):
tr = metadata['RepetitionTime']
signal = self._create_signal(tr, n_tp, events_df, trial_type_weights)
noise = self._aggregate_noise(noise_df)
contrast = signal.max()
signal_scaling_factor = contrast * cnr * noise.std()
timeseries = (signal * signal_scaling_factor) + noise
scaled_timeseries = (timeseries * 10) + 100
self.data = self._create_nii(scaled_timeseries)
def create_fname(self, fname_params, meta_params):
self.fname = self.base_dir / build_path(fname_params, self.PATTERN)
self.meta_fname = self.base_dir / build_path(meta_params, self.PATTERN)
def write_file(self):
self.fname.dirpath().ensure_dir()
self.data.to_filename(self.fname.strpath)
if self.metadata:
write_metadata(self.meta_fname, self.metadata)
return self.fname, self.meta_fname
return self.fname
class DerivMaskFileCreator():
PATTERN = (
"sub-{subject}[/ses-{session}]/{datatype<func>|func}/"
"sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}]"
"[_ce-{ceagent}][_dir-{direction}][_rec-{reconstruction}][_run-{run}]"
"[_echo-{echo}][_space-{space}][_cohort-{cohort}][_res-{resolution}]"
"_desc-{desc}_{suffix<mask>|mask}{extension<.nii|.nii.gz|.json>|.nii.gz}"
)
FILE_PARAMS = {"suffix": "mask", "desc": "brain", "space": "T1w"}
def __init__(
self, base_dir, fname_params, func_img, metadata=None
):
self.base_dir = base_dir
self.metadata = metadata
fname_params = {**fname_params, **self.FILE_PARAMS, "extension": ".nii.gz"}
meta_params = {**fname_params, **self.FILE_PARAMS, "extension": ".json"}
self.init_data(func_img)
self.create_fname(fname_params, meta_params)
def init_data(self, func_img):
mask_data = (func_img.get_fdata()[:, :, :, 0] > 10).astype(np.int32)
self.data = nib.Nifti1Image(mask_data, np.eye(4))
def create_fname(self, fname_params, meta_params):
self.fname = self.base_dir / build_path(fname_params, self.PATTERN)
self.meta_fname = self.base_dir / build_path(meta_params, self.PATTERN)
def write_file(self):
self.fname.dirpath().ensure_dir()
self.data.to_filename(self.fname.strpath)
if self.metadata:
write_metadata(self.meta_fname, self.metadata)
return self.fname, self.meta_fname
return self.fname
class FuncFileCreator():
PATTERN = (
"sub-{subject}[/ses-{session}]/{datatype<func>|func}/"
"sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}]"
"[_ce-{ceagent}][_dir-{direction}][_rec-{reconstruction}][_run-{run}]"
"[_echo-{echo}]_{suffix<bold|cbv|phase|sbref>}{extension<.nii|.nii.gz|.json>|.nii.gz}"
)
FILE_PARAMS = {"suffix": "bold"}
def __init__(
self, base_dir, fname_params, events_df, trial_type_weights, noise_df, n_tp, cnr, metadata
):
self.base_dir = base_dir
self.metadata = metadata
fname_params = {**fname_params, **self.FILE_PARAMS, "extension": ".nii.gz"}
meta_params = {**fname_params, **self.FILE_PARAMS, "extension": ".json"}
self.init_data(events_df, trial_type_weights, noise_df, n_tp, cnr, metadata)
self.create_fname(fname_params, meta_params)
def _create_signal(self, tr, n_tp, events_df, trial_type_weights):
frame_times = np.arange(0, int(n_tp * tr), step=int(tr))
signal = np.zeros(frame_times.shape)
trial_types = events_df['trial_type'].unique()
for condition, weight in zip(trial_types, trial_type_weights):
exp_condition = events_df.query(
f"trial_type == '{condition}'"
)[['onset', 'duration']].values.T
exp_condition = np.vstack([exp_condition, np.repeat(weight, exp_condition.shape[1])])
signal += compute_regressor(
exp_condition, "glover", frame_times, con_id=condition)[0].squeeze()
return signal
def _aggregate_noise(self, noise_df):
return noise_df.values.mean(axis=1)
def _create_nii(self, timeseries):
brain_data = np.random.random((9, 9, 9, len(timeseries)))
brain_data[2:6, 2:6, 2:6, :] += timeseries
return nib.Nifti1Image(brain_data, affine=np.eye(4))
def init_data(self, events_df, trial_type_weights, noise_df, n_tp, cnr, metadata):
tr = metadata['RepetitionTime']
signal = self._create_signal(tr, n_tp, events_df, trial_type_weights)
noise = self._aggregate_noise(noise_df)
contrast = signal.max()
signal_scaling_factor = contrast * cnr * noise.std()
timeseries = (signal * signal_scaling_factor) + noise
scaled_timeseries = (timeseries * 10) + 100
self.data = self._create_nii(scaled_timeseries)
def create_fname(self, fname_params, meta_params):
self.fname = self.base_dir / build_path(fname_params, self.PATTERN)
self.meta_fname = self.base_dir / build_path(meta_params, self.PATTERN)
def write_file(self):
self.fname.dirpath().ensure_dir()
self.data.to_filename(self.fname.strpath)
if self.metadata:
write_metadata(self.meta_fname, self.metadata)
return self.fname, self.meta_fname
return self.fname
class EventsFileCreator():
PATTERN = (
"sub-{subject}[/ses-{session}]/[{datatype<func|meg|beh>|func}/]"
"sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}]"
"[_rec-{reconstruction}][_run-{run}][_echo-{echo}][_recording-{recording}]"
"_{suffix<events>}{extension<.tsv|.json>|.tsv}"
)
FILE_PARAMS = {"suffix": "events", "datatype": "func"}
def __init__(self, base_dir, fname_params, n_events, trial_types, event_duration,
inter_trial_interval, metadata=None):
self.base_dir = base_dir
self.metadata = metadata
fname_params = {**fname_params, **self.FILE_PARAMS, "extension": ".tsv"}
meta_params = {**fname_params, **self.FILE_PARAMS, "extension": ".json"}
self.init_data(n_events, trial_types, event_duration, inter_trial_interval)
self.create_fname(fname_params, meta_params)
def init_data(self, n_events, trial_types, event_duration, inter_trial_interval):
events_dict = {}
n_trial_types = len(trial_types)
experiment_duration = int(n_trial_types * n_events * inter_trial_interval)
events_dict['onset'] = np.arange(0, experiment_duration, inter_trial_interval)
events_dict['trial_type'] = trial_types * n_events
events_dict['duration'] = [event_duration] * n_trial_types * n_events
self.experiment_duration = experiment_duration
self.events_df = pd.DataFrame(events_dict)
def create_fname(self, fname_params, meta_params):
self.fname = self.base_dir / build_path(fname_params, self.PATTERN)
self.meta_fname = self.base_dir / build_path(meta_params, self.PATTERN)
def write_file(self):
self.fname.dirpath().ensure_dir()
self.events_df.to_csv(self.fname, sep='\t', index=False)
if self.metadata:
write_metadata(self.meta_fname, self.metadata)
return self.fname, self.meta_fname
return self.fname
class DummyDerivatives():
"""Create a minimal BIDS+Derivatives dataset for testing"""
DERIVATIVES_DICT = {
"Name": "fMRIPrep - fMRI PREProcessing workflow",
"BIDSVersion": "1.4.1",
"PipelineDescription": {
"Name": "fMRIPrep",
"Version": "1.5.0rc2+14.gf673eaf5",
"CodeURL": "https://github.com/nipreps/fmriprep/archive/1.5.0.tar.gz"
},
"CodeURL": "https://github.com/nipreps/fmriprep",
"HowToAcknowledge": "Please cite our paper (https://doi.org/10.1038/s41592-018-0235-4)",
"SourceDatasetsURLs": [
"https://doi.org/"
],
"License": ""
}
BIDS_DICT = {
"Name": "ice cream and cake",
"BIDSVersion": "1.4.1",
}
def __init__(
self,
base_dir=None,
database_path=None,
participant_labels=None,
session_labels=None,
task_labels=None,
run_labels=None,
trial_types=None,
trial_type_weights=None,
n_events=None,
event_duration=None,
inter_trial_interval=None,
cnr=None,
regr_names=None,
func_metadata=None,
):
self.base_dir = base_dir or Path(mkdtemp(suffix="bids"))
self.database_path = database_path or self.base_dir.dirpath() / 'dbcache'
self.participant_labels = participant_labels or ["bert", "ernie", "gritty"]
self.session_labels = session_labels or ["breakfast", "lunch"]
self.task_labels = task_labels or ["eating"]
self.run_labels = run_labels or ["01", "02"]
self.trial_types = trial_types or ["ice_cream", "cake"]
self.trial_type_weights = trial_type_weights or list(range(1, len(self.trial_types)))
self.n_events = n_events or 15
self.event_duration = event_duration or 1
self.inter_trial_interval = inter_trial_interval or 20
self.cnr = cnr or 2
self.regr_names = regr_names or ["food_sweats", "sugar_jitters"]
self.func_metadata = func_metadata or {"RepetitionTime": 2.0, "SkullStripped": False}
self.deriv_dir = self.base_dir.ensure('derivatives', 'fmriprep', dir=True)
self.create_dataset_descriptions()
self.write_bids_derivatives_dataset()
self.create_layout()
def create_dataset_descriptions(self):
# dataset_description.json files are needed in both bids and derivatives
bids_dataset_json = self.base_dir.ensure("dataset_description.json")
with open(str(bids_dataset_json), 'w') as dj:
json.dump(self.BIDS_DICT, dj)
deriv_dataset_json = self.deriv_dir.ensure("dataset_description.json")
with open(str(deriv_dataset_json), 'w') as dj:
json.dump(self.DERIVATIVES_DICT, dj)
def write_bids_derivatives_dataset(self):
# generate all combinations of relevant file parameters
unique_scans = product(
self.participant_labels,
self.session_labels or (None,),
self.task_labels or (None,),
self.run_labels or (None,),
)
param_order = ['subject', 'session', 'task', 'run']
for scan_params in unique_scans:
file_params = {k: v for k, v in zip(param_order, scan_params)}
# create events file
events = EventsFileCreator(
self.base_dir, file_params, self.n_events, self.trial_types,
self.event_duration, self.inter_trial_interval
)
events.write_file()
# calculate number of timepoints
n_tp = int(events.experiment_duration // self.func_metadata["RepetitionTime"])
# create noise file
noise = RegressorFileCreator(self.deriv_dir, file_params, self.regr_names, n_tp)
noise.write_file()
# create bids func file
FuncFileCreator(
self.base_dir, file_params, events.events_df,
self.trial_type_weights, noise.noise_df, n_tp,
self.cnr, self.func_metadata,
).write_file()
# create deriv func file
deriv_func = DerivFuncFileCreator(
self.deriv_dir, file_params, events.events_df,
self.trial_type_weights, noise.noise_df, n_tp,
self.cnr, self.func_metadata,
)
deriv_func.write_file()
# create mask for deriv_func
DerivMaskFileCreator(self.deriv_dir, file_params, deriv_func.data).write_file()
def create_layout(self):
# create bids layout
self.layout = bids.BIDSLayout(
self.base_dir, derivatives=True, database_path=self.database_path,
reset_database=True)
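# A minimal usage sketch (an assumption about intended use, not part of the generator
# itself): the file creators above call py.path.local-style methods (.ensure(),
# .dirpath(), .strpath), so base_dir should be a py.path.local object (e.g. pytest's
# tmpdir fixture) rather than a plain pathlib.Path.
if __name__ == "__main__":
    import py  # provides py.path.local, as assumed above

    base = py.path.local(mkdtemp(suffix="bids"))
    dummy = DummyDerivatives(base_dir=base)
    # The resulting BIDSLayout can be queried like any other dataset, for example:
    print(dummy.layout.get(suffix="bold", extension=".nii.gz", return_type="file"))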
|
// newReconciler returns a new reconcile.Reconciler
func newReconciler(mgr manager.Manager, configInfo *config.ConfigurationInfo, volumeManager volumes.Manager, recorder record.EventRecorder) reconcile.Reconciler {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ctx = logger.NewContextWithLogger(ctx)
return &ReconcileCnsNodeVMAttachment{client: mgr.GetClient(), scheme: mgr.GetScheme(), configInfo: configInfo, volumeManager: volumeManager, nodeManager: cnsnode.GetManager(ctx), recorder: recorder}
} |
def validate_song(song):
attrs = ["default_arrangement", "composer", "copyright", "youtube", "ccli"]
for a in attrs:
if getattr(song, a) in [None, "None"]:
setattr(song, a, "")
return song |
Cultural Conceptualisations in English Words: A Study of Aboriginal Children in Perth
This study explored conceptualisations that two groups of Aboriginal and Anglo-Australian students attending metropolitan schools in Western Australia instantiate through the use of English words. At the time of the study, many educators believed that both these groups of students spoke the same dialect. A group of 30 Aboriginal primary school students and a matching group of Anglo-Australian students participated in the study. Thirty-two English words were used as prompts to evoke schemas and categories in participants. The responses were then interpreted using an ethnographic approach toward the identification of cultural conceptualisations. These responses were compared within and between the two groups. The analysis of the data provided evidence for the operation of two distinct, but overlapping, conceptual systems among the two cultural groups studied. The discrepancies between the two systems largely appear to be rooted in the cultural systems that characterise each group, while the overlap between the two conceptual systems appears to arise from several phenomena such as experience in similar physical environments. One of the implications of the findings is that a critical defining feature of some varieties of a language may be their conceptual basis, rather than their grammatical and/or phonological features. This observation calls for further exploration and perhaps a revisiting of the notion of dialect. |
CINCINNATI – The City of Cincinnati is still exploring the creation of an open container district at The Banks downtown, but City Manager Harry Black said this week nothing is "imminent" in that process – as the buy-in needed to move that initiative forward appears to be lacking.
The city is allowed two such outdoor refreshment areas, or ORAs, after Ohio lawmakers passed a much-publicized bill last year that allows cities of a certain size to designate areas where people can carry open containers and drink alcohol in public.
The idea is to mirror dynamic entertainment districts like Bourbon Street in New Orleans, Fourth Street Live! in Louisville, and Beale Street in Memphis, Tennessee. Patrons wouldn't be allowed to bring in their own drinks but could exit restaurants or bars with their booze to stroll within designated pedestrian zones.
Cities like Middletown and Toledo have jumped on board in the months since, but it appears Cincinnati is no closer to its own ORA than it was at this point last year.
Initially, supporters had hoped to implement an ORA ahead of last summer's Major League Baseball All-Star Game. And Black, last spring, said the city wouldn't waste any time moving forward.
But as of this week, there's still no timeline in place. And in a memo issued Tuesday by Black following an interview request by WCPO.com, he said, "nothing is imminent."
City of Cincinnati officials still consider The Banks a "prime location" for an ORA – given its riverfront access, nightlife and growing residential population. And the city has identified two adjacent so-called community entertainment districts there totaling 199 acres. The ORA could overlay both CEDs combined.
Before moving forward, Black said, businesses, residents and other stakeholders must develop a plan they're "satisfied with."
"We've not gotten to that point," said Rocky Merz, city spokesman. "There's been discussions with the business owners and folks down at The Banks, but in order to do this, we need a plan that covers all aspect of the district – litter, safety, security, boundaries, traffic, all that stuff."
Getting that right, the memo says, requires "serious planning and collaboration."
"They're trying to work something out, but they also don’t want to force something on the business owners there," Merz said. "We don't want to force a square peg in a round hole.
"There's nothing imminent," he added, "but the door is not closed."
When – or if – an ORA is agreed upon, city staff will send an ordinance to City Council for consideration. Following that submission, it will be introduced and available for public comment prior to a final vote.
If council approves, the district is formally designated, and information is forwarded to the State of Ohio Division of Liquor Control for issuance of necessary permits.
Representatives from other parts of Cincinnati have also expressed an initial interest in their own ORAs, but, Black says, none have actively pursued the issue with the city at this time.
While there's no timeline in place, all is not lost. Special permits can still be issued by the city to allow for open container areas at The Banks for events like Opening Day 2016, for example. Such an area was in place at The Banks for the entirety of the All-Star Game.
"The special event mechanism is temporary," Black said, "while the creation of an ORA is lasting… The goal is to create a plan that is sustainable and successful so residents, businesses and a growing number of visitors may enjoy an ORA for years to come." |
/*
*-----------------------------------------------------------------------------
* Filename: dma-test.c
*-----------------------------------------------------------------------------
*-----------------------------------------------------------------------------
* Description:
* This is a demo program for showing dma-buffer way of getting frames from
* IPU driver, color converting them from UYVY/RGB888/RGB565 to ARGB and then
* displaying them on the screen using Wayland.
*-----------------------------------------------------------------------------
*/
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <math.h>
#include <poll.h>
#include <signal.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <pthread.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <linux/input.h>
#include <time.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <wayland-egl.h>
#include <wayland-client.h>
#include <wayland-cursor.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>
#include <drm/drm_fourcc.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <linux/videodev2.h>
#include <linux/v4l2-mediabus.h>
#include <linux/media.h>
#include "mediactl.h"
#include "v4l2subdev.h"
#include <pthread.h>
#include "ias-shell-client-protocol.h"
#include "ivi-application-client-protocol.h"
#include "wayland-drm-client-protocol.h"
/* For GEM */
#include <libdrm/intel_bufmgr.h>
#include <xf86drm.h>
#define BATCH_SIZE 0x80000
#define TARGET_NUM_SECONDS 5
#define BUFFER_COUNT 4
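/*
 * Note on the fragment shaders below: they convert video-range (16-235) YCbCr
 * samples to RGB using the usual BT.601 coefficients.  The 1.1643 factor
 * (255/219) rescales luma, 1.5958 and 2.017 weight the Cr and Cb terms for the
 * red and blue channels, and 0.39173/0.81290 remove the chroma contribution
 * from green.  UYVY and YUYV differ only in byte order, which is why the two
 * variants pick luma and chroma out of different texture channels.
 */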
/* UYVY */
static const char *frag_shader_text_UYVY =
"uniform sampler2D u_texture_top;"\
"uniform sampler2D u_texture_bottom;"\
"uniform bool swap_rb;"\
"uniform bool interlaced;"\
"varying highp vec2 texcoord;"\
"varying mediump vec2 texsize;"\
"void main(void) {"\
" mediump float y, u, v, tmp;"\
" mediump vec4 resultcolor;"\
" mediump vec4 raw;"\
" if (interlaced && fract(texcoord.y * texsize.y) < 0.5) {"\
" raw = texture2D(u_texture_bottom, texcoord);"\
" } else {"\
" raw = texture2D(u_texture_top, texcoord);"\
" }"\
" if (fract(texcoord.x * texsize.x) < 0.5)"\
" raw.a = raw.g;"\
" u = raw.b-0.5;"\
" v = raw.r-0.5;"\
" if (swap_rb) {"\
" tmp = u;"\
" u = v;"\
" v = tmp;"\
" }"\
" y = 1.1643*(raw.a-0.0625);"\
" resultcolor.r = (y+1.5958*(v));"\
" resultcolor.g = (y-0.39173*(u)-0.81290*(v));"\
" resultcolor.b = (y+2.017*(u));"\
" resultcolor.a = 1.0;"\
" gl_FragColor=resultcolor;"\
"}";
/* YUYV */
static const char *frag_shader_text_YUYV =
"uniform sampler2D u_texture_top;"\
"uniform sampler2D u_texture_bottom;"\
"uniform bool swap_rb;"\
"uniform bool interlaced;"\
"varying highp vec2 texcoord;"\
"varying mediump vec2 texsize;"\
"void main(void) {"\
" mediump float y, u, v, tmp;"\
" mediump vec4 resultcolor;"\
" mediump vec4 raw;"\
" if((fract(texcoord.y * texsize.y) < 0.5) && interlaced) {"\
" raw = texture2D(u_texture_bottom, texcoord);"\
" } else {"\
" raw = texture2D(u_texture_top, texcoord);"\
" }"\
" if (fract(texcoord.x * texsize.x) < 0.5)"\
" raw.b = raw.r;"\
" u = raw.g-0.5;"\
" v = raw.a-0.5;"\
" y = 1.1643*(raw.b-0.0625);"\
" resultcolor.r = (y+1.5958*(v));"\
" resultcolor.g = (y-0.39173*(u)-0.81290*(v));"\
" resultcolor.b = (y+2.017*(u));"\
" resultcolor.a = 1.0;"\
" gl_FragColor=resultcolor;"\
"}";
/* RGB565 and RGB888 */
static const char *frag_shader_text_RGB =
"uniform sampler2D u_texture_top;"\
"uniform sampler2D u_texture_bottom;"\
"uniform bool rgb565;"\
"uniform bool swap_rb;"\
"uniform bool interlaced;"\
"varying highp vec2 texcoord;"\
"varying mediump vec2 texsize;"\
"void main(void) {"\
" highp vec4 resultcolor;"\
" highp vec4 raw;"\
" if (interlaced && fract(texcoord.y * texsize.y) < 0.5)"\
" raw = texture2D(u_texture_bottom, texcoord);"\
" else"\
" raw = texture2D(u_texture_top, texcoord);"\
" if(rgb565) raw *= vec4(255.0/32.0, 255.0/64.0, 255.0/32.0, 1.0);"\
" if (swap_rb) resultcolor.rgb = raw.bgr;"\
" else resultcolor.rgb = raw.rgb;"\
" resultcolor.a = 1.0;"\
" gl_FragColor = resultcolor;"\
"}";
/**
* @brief vertex shader for displaying the texture
*/
static const char *vert_shader_text =
"varying highp vec2 texcoord; "\
"varying mediump vec2 texsize; "\
"attribute vec4 pos; "\
"attribute highp vec2 itexcoord; "\
"uniform mat4 modelviewProjection; "\
"uniform mediump vec2 u_texsize; "\
"void main(void) "\
"{ "\
" texcoord = itexcoord; "\
" texsize = u_texsize; "\
" gl_Position = modelviewProjection * pos; "\
"}";
static struct timeval *curr_time, *prev_time;
struct window;
PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR;
PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR;
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES;
PFNGLPROGRAMBINARYOESPROC glProgramBinaryOES = NULL;
PFNGLGETPROGRAMBINARYOESPROC glGetProgramBinaryOES = NULL;
#define ARRAY_SIZE(a) (sizeof(a)/sizeof((a)[0]))
#define OPT_STRIDE 263
#define OPT_BUFFER_SIZE 268
#define _ISP_MODE_PREVIEW 0x8000
#define _ISP_MODE_STILL 0x2000
#define _ISP_MODE_VIDEO 0x4000
#define CLEAR(x) memset(&(x), 0, sizeof(x))
#define ERRSTR strerror(errno)
#define BYE_ON(cond, ...) \
do { \
if (cond) { \
int errsv = errno; \
fprintf(stderr, "ERROR(%s:%d) : ", \
__FILE__, __LINE__); \
errno = errsv; \
fprintf(stderr, __VA_ARGS__); \
abort(); \
} \
} while(0)
static inline int warn(const char *file, int line, const char *fmt, ...)
{
int errsv = errno;
va_list va;
va_start(va, fmt);
fprintf(stderr, "WARN(%s:%d): ", file, line);
vfprintf(stderr, fmt, va);
va_end(va);
errno = errsv;
return 1;
}
#define WARN_ON(cond, ...) \
((cond) ? warn(__FILE__, __LINE__, __VA_ARGS__) : 0)
enum render_type {
RENDER_TYPE_WL,
RENDER_TYPE_GL,
RENDER_TYPE_GL_DMA,
};
struct setup {
char video[32];
unsigned int iw, ih, original_iw;
unsigned int ow, oh;
unsigned int use_wh : 1;
unsigned int in_fourcc;
unsigned int buffer_count;
unsigned int port;
unsigned int fullscreen;
unsigned int exporter;
unsigned int interlaced;
enum render_type render_type;
unsigned int frames_count;
unsigned int loops_count;
unsigned int skip_media_controller_setup;
unsigned int mplane_type;
};
struct v4l2_device {
const char *devname;
int fd;
struct v4l2_pix_format format;
int is_exporter;
enum v4l2_buf_type type;
unsigned char num_planes;
struct v4l2_plane_pix_format plane_fmt[VIDEO_MAX_PLANES];
void *pattern[VIDEO_MAX_PLANES];
unsigned int patternsize[VIDEO_MAX_PLANES];
};
enum field_type {
FIELD_TYPE_NONE,
FIELD_TYPE_TOP,
FIELD_TYPE_BOTTOM
};
static struct {
enum v4l2_buf_type type;
bool supported;
const char *name;
const char *string;
} buf_types[] = {
{ V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE, 1, "Video capture mplanes", "capture-mplane", },
{ V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE, 1, "Video output mplanes", "output-mplane", },
{ V4L2_BUF_TYPE_VIDEO_CAPTURE, 1, "Video capture", "capture", },
{ V4L2_BUF_TYPE_VIDEO_OUTPUT, 1, "Video output", "output", },
{ V4L2_BUF_TYPE_VIDEO_OVERLAY, 0, "Video overlay", "overlay" },
};
struct buffer {
drm_intel_bo *bo;
unsigned int index;
unsigned int fb_handle;
enum field_type field_type;
int dbuf_fd;
uint32_t flink_name;
struct wl_buffer *buf;
EGLImageKHR khrImage;
};
struct output {
struct display *display;
struct wl_output *output;
struct wl_list link;
};
struct display {
struct wl_display *display;
struct wl_registry *registry;
struct wl_compositor *compositor;
struct ias_shell *ias_shell;
struct wl_shell *wl_shell;
struct ivi_application *ivi_application;
struct wl_drm *wl_drm;
struct window *window;
struct wl_list output_list;
int fd;
dri_bufmgr *bufmgr;
struct buffer *buffers;
struct v4l2_device *v4l2;
struct setup *s;
struct {
EGLDisplay dpy;
EGLContext ctx;
EGLConfig conf;
} egl;
};
struct geometry {
int width, height;
};
struct window {
struct display *display;
struct geometry geometry, window_size;
struct wl_surface *surface;
void *shell_surface;
struct ivi_surface *ivi_surface;
struct wl_egl_window *native;
EGLSurface egl_surface;
struct wl_callback *callback;
int fullscreen, opaque, configured, output;
int print_fps, frame_count;
struct {
GLuint fbo;
GLuint color_rbo;
GLuint modelview_uniform;
GLuint gl_texture_size;
GLuint gl_texture[2];
GLuint tex_top;
GLuint tex_bottom;
GLuint rgb565;
GLuint swap_rb;
GLuint interlaced;
GLuint pos;
GLuint col;
GLuint attr_tex;
GLuint program;
GLfloat hmi_vtx[12u]; //!< vertex positions of the textured quad
GLfloat hmi_tex[8u]; //!< texture coordinates for each vertex
GLubyte hmi_ind[6u]; //!< triangle indices used to draw the quad (two triangles)
GLfloat model_view[16u];
} gl;
};
struct time_measurements
{
struct timespec app_start_time;
struct timespec before_md_init_time;
struct timespec md_init_time;
struct timespec weston_init_time;
struct timespec v4l2_init_time;
struct timespec rendering_init_time;
struct timespec streamon_time;
struct timespec first_frame_time;
struct timespec first_frame_rendered_time;
} time_measurements;
static void signal_int(int signum);
static void video_set_buf_type(struct v4l2_device *dev, enum v4l2_buf_type type)
{
dev->type = type;
}
static int v4l2_buf_type_from_string(const char *str)
{
unsigned int i;
for (i = 0; i < ARRAY_SIZE(buf_types); i++) {
if (!buf_types[i].supported)
continue;
if (strcmp(buf_types[i].string, str))
continue;
return buf_types[i].type;
}
return -1;
}
static bool video_is_mplane(struct v4l2_device *dev)
{
return (dev->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE ||
dev->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
}
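/* Returns the elapsed time from startTime to endTime in milliseconds. */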
double clock_diff(struct timespec startTime, struct timespec endTime)
{
struct timespec diff;
if ((endTime.tv_nsec - startTime.tv_nsec) >= 0)
{
diff.tv_sec = endTime.tv_sec - startTime.tv_sec;
diff.tv_nsec = endTime.tv_nsec - startTime.tv_nsec;
} else {
diff.tv_sec = endTime.tv_sec - startTime.tv_sec - 1;
diff.tv_nsec = 1000000000 + endTime.tv_nsec - startTime.tv_nsec;
}
return diff.tv_sec * 1000 + (double)(diff.tv_nsec) / 1000000;
}
void print_time_measurement(const char* name, struct timespec start, struct timespec end)
{
double diff;
double ts;
diff = clock_diff(start, end);
ts = end.tv_sec + (double)end.tv_nsec/1000000000;
printf("%-25s | %6.03f s | %6.02f ms\n", name, ts, diff);
}
void print_time_measurements()
{
printf("DMA TEST TIME STATS\n");
printf("%-25s | %-10s | %-6s\n", "Tracepoint", "System ts", "Time since app start");
print_time_measurement("App start", time_measurements.app_start_time, time_measurements.app_start_time);
print_time_measurement("Media ctl setup", time_measurements.app_start_time, time_measurements.md_init_time);
print_time_measurement("V4L2 setup", time_measurements.app_start_time, time_measurements.v4l2_init_time);
print_time_measurement("IPU streamon", time_measurements.app_start_time, time_measurements.streamon_time);
print_time_measurement("Weston ready", time_measurements.app_start_time, time_measurements.weston_init_time);
print_time_measurement("EGL/GL setup", time_measurements.app_start_time, time_measurements.rendering_init_time);
print_time_measurement("First frame received", time_measurements.app_start_time, time_measurements.first_frame_time);
print_time_measurement("First frame displayed", time_measurements.app_start_time, time_measurements.first_frame_rendered_time);
}
int first_frame_received = 0;
int first_frame_rendered = 0;
#define GET_TS(t) clock_gettime(CLOCK_MONOTONIC, &t)
static int running = 1;
static int error_recovery = 0;
struct buffer *cur_top_buffer = NULL;
struct buffer *cur_bottom_buffer = NULL;
static struct output *
get_default_output(struct display *display)
{
struct output *iter;
int counter = 0;
wl_list_for_each(iter, &display->output_list, link) {
if(counter++ == display->window->output)
return iter;
}
/* Unreachable, but avoids compiler warning */
return NULL;
}
static GLuint
create_shader(struct window *window, const char *source, GLenum shader_type)
{
GLuint shader;
GLint status;
shader = glCreateShader(shader_type);
assert(shader != 0);
glShaderSource(shader, 1, (const char **) &source, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (!status) {
char log[1000];
GLsizei len;
glGetShaderInfoLog(shader, 1000, &len, log);
fprintf(stderr, "Error: compiling %s: %*s\n",
shader_type == GL_VERTEX_SHADER ? "vertex" : "fragment",
len, log);
exit(1);
}
return shader;
}
static void
handle_ping(void *data, struct wl_shell_surface *shell_surface,
uint32_t serial)
{
wl_shell_surface_pong(shell_surface, serial);
}
static void
handle_configure(void *data, struct wl_shell_surface *shell_surface,
uint32_t edges, int32_t width, int32_t height)
{
struct window *window = data;
window->geometry.width = width;
window->geometry.height = height;
if (!window->fullscreen)
window->window_size = window->geometry;
}
static void
handle_popup_done(void *data, struct wl_shell_surface *shell_surface)
{
}
static struct wl_shell_surface_listener wl_shell_surface_listener = {
handle_ping,
handle_configure,
handle_popup_done
};
static void
ias_handle_ping(void *data, struct ias_surface *ias_surface,
uint32_t serial)
{
ias_surface_pong(ias_surface, serial);
}
static void
ias_handle_configure(void *data, struct ias_surface *ias_surface,
int32_t width, int32_t height)
{
struct window *window = data;
window->geometry.width = width;
window->geometry.height = height;
if (!window->fullscreen)
window->window_size = window->geometry;
}
static struct ias_surface_listener ias_surface_listener = {
ias_handle_ping,
ias_handle_configure,
};
static void
ivi_handle_configure(void *data, struct ivi_surface *ivi_surface,
int32_t width, int32_t height) {
struct window *window = data;
wl_egl_window_resize(window->native, width, height, 0, 0);
window->geometry.width = width;
window->geometry.height = height;
if (!window->fullscreen)
window->window_size = window->geometry;
}
static const struct ivi_surface_listener ivi_surface_listener = {
ivi_handle_configure,
};
static void
redraw(void *data, struct wl_callback *callback, uint32_t time);
static void
configure_callback(void *data, struct wl_callback *callback, uint32_t time)
{
struct window *window = data;
wl_callback_destroy(callback);
window->configured = 1;
if (window->callback == NULL)
redraw(data, NULL, time);
}
static struct wl_callback_listener configure_callback_listener = {
configure_callback,
};
static void
toggle_fullscreen(struct window *window, int fullscreen)
{
struct wl_callback *callback;
struct display *display = window->display;
window->fullscreen = fullscreen;
window->configured = 0;
if (fullscreen) {
if (display->ias_shell) {
ias_surface_set_fullscreen(window->shell_surface,
get_default_output(display)->output);
}
if (display->wl_shell) {
wl_shell_surface_set_fullscreen(window->shell_surface,
WL_SHELL_SURFACE_FULLSCREEN_METHOD_DEFAULT,
0, NULL);
}
} else {
if (display->ias_shell) {
ias_surface_unset_fullscreen(window->shell_surface, window->window_size.width, window->window_size.height);
ias_shell_set_zorder(display->ias_shell,
window->shell_surface, 0);
}
if (display->wl_shell) {
wl_shell_surface_set_toplevel(window->shell_surface);
}
handle_configure(window, window->shell_surface, 0,
window->window_size.width,
window->window_size.height);
}
callback = wl_display_sync(window->display->display);
wl_callback_add_listener(callback, &configure_callback_listener,
window);
}
static void
destroy_surface(struct window *window)
{
struct display *display = window->display;
if (display->s->render_type != RENDER_TYPE_WL) {
/* Required, otherwise segfault in egl_dri2.c: dri2_make_current()
* on eglReleaseThread(). */
eglMakeCurrent(window->display->egl.dpy, EGL_NO_SURFACE, EGL_NO_SURFACE,
EGL_NO_CONTEXT);
eglDestroySurface(window->display->egl.dpy, window->egl_surface);
wl_egl_window_destroy(window->native);
}
if (display->ias_shell) {
ias_surface_destroy(window->shell_surface);
}
if (display->wl_shell) {
wl_shell_surface_destroy(window->shell_surface);
}
wl_surface_destroy(window->surface);
if (window->callback)
wl_callback_destroy(window->callback);
}
static const struct wl_callback_listener frame_listener;
static void
update_fps(struct window *window)
{
float time_diff_secs;
struct timeval time_diff;
struct timeval *tmp;
if (window->print_fps) {
window->frame_count++;
gettimeofday(curr_time, NULL);
timersub(curr_time, prev_time, &time_diff);
time_diff_secs = (time_diff.tv_sec * 1000 + time_diff.tv_usec / 1000) / 1000.0f;
if (time_diff_secs >= TARGET_NUM_SECONDS) {
fprintf(stdout, "Rendered %d frames in %6.3f seconds = %6.3f FPS\n",
window->frame_count, time_diff_secs, window->frame_count / time_diff_secs);
fflush(stdout);
window->frame_count = 0;
tmp = prev_time;
prev_time = curr_time;
curr_time = tmp;
}
}
}
static void v4l2_queue_buffer(struct v4l2_device *dev, const struct buffer *buffer)
{
struct v4l2_buffer buf;
int ret;
struct v4l2_plane planes[VIDEO_MAX_PLANES];
memset(&planes, 0, sizeof planes);
memset(&buf, 0, sizeof buf);
if (video_is_mplane(dev)) {
buf.m.planes = planes;
buf.length = dev->num_planes;
}
buf.type = dev->type;
if(dev->is_exporter) {
buf.memory = V4L2_MEMORY_MMAP;
} else {
buf.memory = V4L2_MEMORY_DMABUF;
buf.m.fd = buffer->dbuf_fd;
}
buf.index = buffer->index;
ret = ioctl(dev->fd, VIDIOC_QBUF, &buf);
if (ret) {
error_recovery = 1;
signal_int(0);
}
}
static struct buffer *v4l2_dequeue_buffer(struct v4l2_device *dev, struct buffer *buffers)
{
struct v4l2_buffer buf;
int ret;
struct v4l2_plane planes[VIDEO_MAX_PLANES];
memset(&buf, 0, sizeof buf);
memset(planes, 0, sizeof planes);
buf.type = dev->type;
buf.length = VIDEO_MAX_PLANES;
buf.m.planes = planes;
if(dev->is_exporter) {
buf.memory = V4L2_MEMORY_MMAP;
} else {
buf.memory = V4L2_MEMORY_DMABUF;
}
ret = ioctl(dev->fd, VIDIOC_DQBUF, &buf);
if (ret)
return NULL;
if(buf.field == V4L2_FIELD_TOP) {
buffers[buf.index].field_type = FIELD_TYPE_TOP;
} else if(buf.field == V4L2_FIELD_BOTTOM) {
buffers[buf.index].field_type = FIELD_TYPE_BOTTOM;
} else {
buffers[buf.index].field_type = FIELD_TYPE_NONE;
}
return &buffers[buf.index];
}
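/*
 * Fills in a column-major 4x4 orthographic projection matrix in the style of
 * glOrtho(), assuming the destination array was zero-initialized beforehand
 * (only the non-zero entries are written).  The translation terms written to
 * data[12..14] omit the negation glOrtho() applies, but make_matrix() below
 * only ever calls this with symmetric bounds, where those terms are zero anyway.
 */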
static void make_orth_matrix(GLfloat *data, GLfloat left, GLfloat right,
GLfloat bottom, GLfloat top,
GLfloat znear, GLfloat zfar)
{
data[0] = 2.0/(right-left);
data[5] = 2.0/(top-bottom);
data[10] = -2.0/(zfar-znear);
data[15] = 1.0;
data[12] = (right+left)/(right-left);
data[13] = (top+bottom)/(top-bottom);
data[14] = (zfar+znear)/(zfar-znear);
}
static void make_matrix(GLfloat *data, GLfloat v)
{
make_orth_matrix(data, -v, v, -v, v, -v, v);
}
static void redraw_wl_way(struct window *window, struct wl_buffer *buf, uint32_t time)
{
wl_surface_attach(window->surface, buf, 0, 0);
wl_surface_damage(window->surface, 0, 0, window->display->s->iw, window->display->s->ih);
wl_surface_commit(window->surface);
if (first_frame_received == 1 && first_frame_rendered == 0) {
first_frame_rendered = 1;
GET_TS(time_measurements.first_frame_rendered_time);
print_time_measurements();
}
}
static void redraw_egl_way(struct window *window, struct buffer *top_buf,
struct buffer *bottom_buf, unsigned char *start_top, unsigned char *start_bottom)
{
int width, height;
glViewport(0, 0, window->geometry.width, window->geometry.height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0, 0.0, 0.0, 0.0); // full transparency
glActiveTexture(GL_TEXTURE0);
width = window->display->s->iw;
height = window->display->s->ih;
if (window->display->s->in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16 ||
window->display->s->in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
width = window->display->s->iw/2;
}
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, window->gl.gl_texture[0]);
if (window->display->s->render_type == RENDER_TYPE_GL_DMA) {
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, top_buf->khrImage);
} else {
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
GL_RGBA, GL_UNSIGNED_BYTE, start_top);
}
if (window->display->s->interlaced) {
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, window->gl.gl_texture[1]);
if (window->display->s->render_type == RENDER_TYPE_GL_DMA) {
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, bottom_buf->khrImage);
} else {
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
GL_RGBA, GL_UNSIGNED_BYTE, start_bottom);
}
glActiveTexture(GL_TEXTURE0);
}
glUseProgram(window->gl.program);
glUniformMatrix4fv(window->gl.modelview_uniform, 1, GL_FALSE, window->gl.model_view);
glVertexAttribPointer(window->gl.pos, 3, GL_FLOAT, GL_FALSE, 0, window->gl.hmi_vtx);
glVertexAttribPointer(window->gl.attr_tex, 2, GL_FLOAT, GL_FALSE, 0,
window->gl.hmi_tex);
glEnableVertexAttribArray(window->gl.pos);
glEnableVertexAttribArray(window->gl.attr_tex);
glDrawElements(GL_TRIANGLES, 2*3, GL_UNSIGNED_BYTE, window->gl.hmi_ind);
glDisableVertexAttribArray(window->gl.pos);
glDisableVertexAttribArray(window->gl.attr_tex);
glBindTexture(GL_TEXTURE_2D, 0);
wl_surface_set_opaque_region(window->surface, NULL);
eglSwapBuffers(window->display->egl.dpy, window->egl_surface);
if (first_frame_received == 1 && first_frame_rendered == 0) {
first_frame_rendered = 1;
GET_TS(time_measurements.first_frame_rendered_time);
print_time_measurements();
}
}
static void
redraw(void *data, struct wl_callback *callback, uint32_t time)
{
struct window *window = data;
struct buffer * buf_top = (cur_top_buffer)
? cur_top_buffer
: &(window->display->buffers[0]);
struct buffer * buf_bottom = (cur_bottom_buffer)
? cur_bottom_buffer
: &(window->display->buffers[0]);
unsigned char *start_top;
unsigned char *start_bottom;
start_top = (unsigned char *) buf_top->bo->virtual;
start_bottom = (unsigned char *) buf_bottom->bo->virtual;
if (callback)
wl_callback_destroy(callback);
window->callback = wl_surface_frame(window->surface);
wl_callback_add_listener(window->callback, &frame_listener, window);
update_fps(window);
if (window->display->s->render_type == RENDER_TYPE_WL) {
redraw_wl_way(window, buf_top->buf, time);
} else {
redraw_egl_way(window, buf_top, buf_bottom, start_top, start_bottom);
}
}
static const struct wl_callback_listener frame_listener = {
redraw
};
static void
display_add_output(struct display *d, uint32_t id)
{
struct output *output;
output = malloc(sizeof *output);
if (output == NULL)
return;
memset(output, 0, sizeof *output);
output->display = d;
output->output =
wl_registry_bind(d->registry, id, &wl_output_interface, 1);
wl_list_insert(d->output_list.prev, &output->link);
}
static void
registry_handle_global(void *data, struct wl_registry *registry,
uint32_t name, const char *interface, uint32_t version)
{
struct display *d = data;
if (strcmp(interface, "wl_compositor") == 0) {
d->compositor =
wl_registry_bind(registry, name,
&wl_compositor_interface, 1);
} else if (strcmp(interface, "wl_shell") == 0) {
if (!d->ias_shell && !d->ivi_application) {
d->wl_shell = wl_registry_bind(registry, name,
&wl_shell_interface, 1);
}
} else if (strcmp(interface, "ias_shell") == 0) {
if (!d->wl_shell && !d->ivi_application) {
d->ias_shell = wl_registry_bind(registry, name,
&ias_shell_interface, 1);
}
} else if (strcmp(interface, "ivi_application") == 0) {
if (!d->ias_shell && !d->wl_shell) {
d->ivi_application = wl_registry_bind(registry, name,
&ivi_application_interface, 1);
}
} else if (strcmp(interface, "wl_output") == 0) {
display_add_output(d, name);
} else if (!strcmp(interface, "wl_drm")) {
d->wl_drm =
wl_registry_bind(registry, name, &wl_drm_interface, 1);
}
}
static const struct wl_registry_listener registry_listener = {
registry_handle_global
};
static void
signal_int(int signum)
{
running = 0;
}
static int
init_gem(struct display *display)
{
/* Init GEM */
display->fd = drmOpen("i915", NULL);
if (display->fd < 0)
return -1;
/* If the DRM device is opened here before weston opens it, master mode
 * needs to be released, otherwise weston won't initialize.
 */
drmDropMaster(display->fd);
display->bufmgr = intel_bufmgr_gem_init(display->fd, BATCH_SIZE);
if (display->bufmgr == NULL)
return -1;
return 0;
}
static void
destroy_gem(struct display *display)
{
/* Free the GEM buffer */
drm_intel_bufmgr_destroy(display->bufmgr);
drmClose(display->fd);
}
static int
drm_buffer_to_prime(struct display *display, struct buffer *buffer, unsigned int size)
{
int ret;
buffer->bo = drm_intel_bo_gem_create_from_prime(display->bufmgr,
buffer->dbuf_fd, (int) size);
if(!buffer->bo) {
printf("ERROR: Couldn't create from prime\n");
return -1;
}
/* Do a mmap once */
ret = drm_intel_gem_bo_map_gtt(buffer->bo);
if(ret) {
printf("ERROR: Couldn't map buffer->bo\n");
return -1;
}
ret = drm_intel_bo_flink(buffer->bo, &buffer->flink_name);
if (ret) {
printf("ERROR: Couldn't flink buffer\n");
return -1;
}
return 0;
}
static int
create_buffer(struct display *display, struct buffer *buffer, unsigned int size)
{
int ret;
buffer->bo = drm_intel_bo_alloc_for_render(display->bufmgr,
"display surface",
size,
0);
if (buffer->bo == NULL)
return -1;
struct drm_prime_handle prime;
memset(&prime, 0, sizeof prime);
prime.handle = buffer->bo->handle;
ret = ioctl(display->fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
if (WARN_ON(ret, "PRIME_HANDLE_TO_FD failed: %s\n", ERRSTR))
return -1;
buffer->dbuf_fd = prime.fd;
ret = drm_intel_bo_flink(buffer->bo, &buffer->flink_name);
if (ret) {
printf("ERROR: Couldn't flink buffer\n");
return -1;
}
/* Do a mmap once */
ret = drm_intel_bo_map(buffer->bo, 1);
if(ret) {
printf("ERROR: Couldn't map buf->bo\n");
return -1;
}
return 0;
}
static void usage(char *name)
{
fprintf(stderr, "usage: %s [-bFfhidMopSst]\n", name);
fprintf(stderr, "\nCapture options:\n\n");
fprintf(stderr, "\t-d <video-node>\tset video node (default: auto detect)\n");
fprintf(stderr, "\t-I <width,height>\tset input resolution\n");
fprintf(stderr, "\t-O <width,height>\tset output resolution\n");
fprintf(stderr, "\t-i\tinterlace\n");
fprintf(stderr, "\t-n\tport number (0 for HDMI, 4 for camera)\n");
fprintf(stderr, "\t-p\tBuffer type (\"capture\", \"output\", \"capture-mplane\" or \"output-mplane\")\n");
fprintf(stderr, "\nGeneric options:\n\n");
fprintf(stderr, "\t-b buffer_count\tset number of buffers\n");
fprintf(stderr, "\t-N frames_count\tnumber of frames to display (0 = no limit)\n");
fprintf(stderr, "\t-l loops\tnumber of loops to be run (0 = no limit)\n");
fprintf(stderr, "\t-m\tskips media controller setup\n");
fprintf(stderr, "\t-h\tshow this help\n");
}
static inline int parse_rect(char *s, struct v4l2_rect *r)
{
return sscanf(s, "%d,%d@%d,%d", &r->width, &r->height,
&r->left, &r->top) != 4;
}
static int parse_args(int argc, char *argv[], struct setup *s)
{
if(argc <= 1) {
usage(argv[0]);
return -1;
}
int c, ret;
memset(s, 0, sizeof(*s));
s->mplane_type=ret=V4L2_BUF_TYPE_VIDEO_CAPTURE;
while((c = getopt(argc, argv, "b:f:hd:iI:O:n:Er:p:N:ml:")) != -1) {
switch (c) {
case 'b':
ret = sscanf(optarg, "%u", &s->buffer_count);
if (WARN_ON(ret != 1, "incorrect buffer count\n"))
return -1;
break;
case 'f':
if (WARN_ON(strlen(optarg) != 4, "invalid fourcc\n"))
return -1;
if (strncmp(optarg, "UYVY", 4) == 0) {
s->in_fourcc = V4L2_MBUS_FMT_UYVY8_1X16;
} else if (strncmp(optarg, "YUYV", 4) == 0) {
s->in_fourcc = V4L2_MBUS_FMT_YUYV8_1X16;
} else if (strncmp(optarg, "RGB3", 4) == 0) {
s->in_fourcc = V4L2_MBUS_FMT_RGB888_1X24;
} else {
				/* By default, fall back to RGB565 */
s->in_fourcc = MEDIA_BUS_FMT_RGB565_1X16;
}
break;
case '?':
case 'h':
usage(argv[0]);
return -1;
case 'd':
strncpy(s->video, optarg, 31);
break;
case 'I':
ret = sscanf(optarg, "%u,%u", &s->iw, &s->ih);
if (WARN_ON(ret != 2, "incorrect input size\n"))
return -1;
s->use_wh = 1;
s->original_iw = s->iw;
break;
case 'O':
ret = sscanf(optarg, "%u,%u", &s->ow, &s->oh);
if (WARN_ON(ret != 2, "incorrect output size\n"))
return -1;
break;
case 'p':
ret = v4l2_buf_type_from_string(optarg);
if (ret == -1) {
printf("Bad buffer type \"%s\"\n", optarg);
return ret;
}
s->mplane_type=ret;
break;
case 'n':
ret = sscanf(optarg, "%u", &s->port);
break;
case 'E':
s->exporter = 1;
break;
case 'i':
s->interlaced = 1;
break;
case 'r':
if (strncmp(optarg, "GL_DMA", 6) == 0) {
s->render_type = RENDER_TYPE_GL_DMA;
} else if (strncmp(optarg, "GL", 2) == 0 ){
s->render_type = RENDER_TYPE_GL;
} else {
s->render_type = RENDER_TYPE_WL;
}
break;
case 'N':
s->frames_count = atoi(optarg);
break;
case 'l':
s->loops_count = atoi(optarg);
break;
case 'm':
s->skip_media_controller_setup = 1;
break;
}
}
return 0;
}
#define V4L2_SUBDEV_ROUTE_FL_ACTIVE (1 << 0)
#define V4L2_SUBDEV_ROUTE_FL_IMMUTABLE (1 << 1)
#define V4L2_SUBDEV_ROUTE_FL_SOURCE (1 << 2)
/**
* struct v4l2_subdev_route - A signal route inside a subdev
* @sink_pad: the sink pad
* @sink_stream: the sink stream
* @source_pad: the source pad
* @source_stream: the source stream
* @flags: route flags:
*
* V4L2_SUBDEV_ROUTE_FL_ACTIVE: Is the stream in use or not? An
* active stream will start when streaming is enabled on a video
* node. Set by the user.
*
* V4L2_SUBDEV_ROUTE_FL_SOURCE: Is the sub-device the source of a
* stream? In this case the sink information is unused (and
* zero). Set by the driver.
*
* V4L2_SUBDEV_ROUTE_FL_IMMUTABLE: Is the stream immutable, i.e.
 * unable to be activated and deactivated? Set by the driver.
*/
struct v4l2_subdev_route {
__u32 sink_pad;
__u32 sink_stream;
__u32 source_pad;
__u32 source_stream;
__u32 flags;
__u32 reserved[5];
};
/**
* struct v4l2_subdev_routing - Routing information
* @routes: the routes array
* @num_routes: the total number of routes in the routes array
*/
struct v4l2_subdev_routing {
struct v4l2_subdev_route *routes;
__u32 num_routes;
__u32 reserved[5];
};
#define VIDIOC_SUBDEV_G_ROUTING _IOWR('V', 38, struct v4l2_subdev_routing)
#define VIDIOC_SUBDEV_S_ROUTING _IOWR('V', 39, struct v4l2_subdev_routing)
char* find_entity(struct media_device* md, const char* entity_name)
{
struct media_entity *entity = NULL;
const struct media_entity_desc *entity_desc = NULL;
int entities_count = media_get_entities_count(md);
int i;
char *full_entity_name;
for (i = 0; i < entities_count; i++) {
entity = media_get_entity(md, i);
entity_desc = media_entity_get_info(entity);
if (strncmp(entity_name, entity_desc->name, strlen(entity_name)) == 0) {
full_entity_name = strdup(entity_desc->name);
return full_entity_name;
}
}
return NULL;
}
const char* get_entity_devname(struct media_device* md, const char* entity_name)
{
struct media_entity * entity = NULL;
entity = media_get_entity_by_name(md, entity_name);
if (!entity) {
printf("Cannot find entity %s\n", entity_name);
return NULL;
}
return media_entity_get_devname(entity);
}
int set_fmt(struct media_device* md, const char* entity_name, unsigned int width, unsigned int height, int fmt, int interlaced, unsigned int pad)
{
struct media_entity * entity = NULL;
entity = media_get_entity_by_name(md, entity_name);
if (!entity) {
printf("Cannot find entity %s\n", entity_name);
return -1;
}
int ret = v4l2_subdev_open(entity);
if (ret < 0) {
printf("Cannot open subdev\n");
return -1;
}
struct v4l2_mbus_framefmt format;
format.width = width;
format.height = height;
format.code = fmt;
format.field = V4L2_FIELD_NONE;
if (interlaced) {
format.field = V4L2_FIELD_ALTERNATE;
}
ret = v4l2_subdev_set_format(entity, &format, pad, V4L2_SUBDEV_FORMAT_ACTIVE);
if (ret < 0) {
printf("Cannot set format\n");
return -1;
}
v4l2_subdev_close(entity);
return 0;
}
int set_ctrl(struct media_device *md, const char* entity_name, int ctrl_id, int ctrl_value)
{
int ret;
struct media_entity * entity = NULL;
entity = media_get_entity_by_name(md, entity_name);
if (!entity) {
printf("Cannot find entity %s\n", entity_name);
return -1;
}
const char* subdev_node = media_entity_get_devname(entity);
int subdev_fd = open(subdev_node, O_RDWR);
if (subdev_fd < 0) {
printf("Cannot open subdev\n");
return -1;
}
struct v4l2_control ctrl;
ctrl.id = ctrl_id;
ctrl.value = ctrl_value;
ret = ioctl(subdev_fd, VIDIOC_S_CTRL, &ctrl);
if (ret < 0) {
printf("unable to set control %s %d\n", strerror(ret), ret);
return -1;
}
close(subdev_fd);
return 0;
}
int setup_routing(struct media_device *md, const char* entity_name)
{
int ret;
struct media_entity * entity = NULL;
entity = media_get_entity_by_name(md, entity_name);
if (!entity) {
printf("Cannot find entity %s\n", entity_name);
return -1;
}
const char* subdev_node = media_entity_get_devname(entity);
int subdev_fd = open(subdev_node, O_RDWR);
if (subdev_fd < 0) {
printf("Cannot open subdev\n");
return -1;
}
struct v4l2_subdev_routing routing;
struct v4l2_subdev_route route[2];
route[0].sink_pad = 0;
route[0].source_pad = 8;
route[0].sink_stream = 0;
route[0].source_stream = 0;
route[0].flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE;
route[1].sink_pad = 4;
route[1].source_pad = 12;
route[1].sink_stream = 0;
route[1].source_stream = 0;
route[1].flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE;
routing.routes = &route[0];
routing.num_routes = 2;
ret = ioctl(subdev_fd, VIDIOC_SUBDEV_S_ROUTING, &routing);
	if (ret < 0) {
		printf("unable to set routing: %s (%d)\n", strerror(errno), errno);
close(subdev_fd);
return -1;
}
close(subdev_fd);
return 0;
}
int set_compose(struct media_device *md, const char* entity_name, int width, int height, int pad)
{
struct media_entity * entity = NULL;
entity = media_get_entity_by_name(md, entity_name);
if (!entity) {
printf("Cannot find entity %s\n", entity_name);
return -1;
}
int ret = v4l2_subdev_open(entity);
if (ret < 0) {
printf("Cannot open subdev\n");
return -1;
}
struct v4l2_rect rect;
rect.left = 0;
rect.top = 0;
rect.width = width;
rect.height = height;
ret = v4l2_subdev_set_selection(entity, &rect, pad, V4L2_SEL_TGT_COMPOSE, V4L2_SUBDEV_FORMAT_ACTIVE);
if (ret < 0) {
printf("Cannot set crop\n");
return -1;
}
v4l2_subdev_close(entity);
return 0;
}
int setup_link(struct media_device* md, const char* source_entity_name, int source_pad_number,
const char* sink_entity_name, int sink_pad_number,
int flags)
{
struct media_entity * source_entity = NULL;
struct media_entity * sink_entity = NULL;
struct media_pad* source_pad = NULL;
struct media_pad* sink_pad = NULL;
source_entity = media_get_entity_by_name(md, source_entity_name);
if (!source_entity) {
printf("Cannot find entity %s\n", source_entity_name);
return -1;
}
source_pad = (struct media_pad*)media_entity_get_pad(source_entity, source_pad_number);
if (!source_pad) {
printf("Cannot find pad %d of entity %s\n", source_pad_number, source_entity_name);
return -1;
}
sink_entity = media_get_entity_by_name(md, sink_entity_name);
if (!sink_entity) {
printf("Cannot find entity %s\n", sink_entity_name);
return -1;
}
sink_pad = (struct media_pad*)media_entity_get_pad(sink_entity, sink_pad_number);
if (!sink_pad) {
printf("Cannot find pad %d of entity %s\n", sink_pad_number, sink_entity_name);
return -1;
}
int ret = media_setup_link(md, source_pad, sink_pad, MEDIA_LNK_FL_ENABLED | (flags));
if (ret < 0) {
printf("Cannot setup link %d\n", ret);
return -1;
}
return 0;
}
static void media_controller_init(struct setup* s)
{
int ret;
struct media_device* md = NULL;
md = media_device_new("/dev/media0");
BYE_ON(!md, "Cannot create media device\n");
ret = media_device_enumerate(md);
BYE_ON(ret, "Cannot enumerate media device\n");
char* adv_pa = NULL;
char* adv_binner = NULL;
const char* ipu4_csi = "Intel IPU4 CSI-2 0";
ret = setup_routing(md, "Intel IPU4 CSI2 BE SOC");
BYE_ON(ret, "Cannot setup routing\n");
if (s->port == 0) {
adv_pa = find_entity(md, "adv7481-hdmi pixel array a");
adv_binner = find_entity(md, "adv7481-hdmi binner a");
} else if (s->port == 4) {
adv_pa = find_entity(md, "adv7481-cvbs pixel array a");
adv_binner = find_entity(md, "adv7481-cvbs binner a");
ipu4_csi = "Intel IPU4 CSI-2 4";
}
BYE_ON((adv_pa == NULL || adv_binner == NULL), "Cannot find pixel array and binner entities\n");
ret = set_fmt(md, ipu4_csi, s->iw, s->ih, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", ipu4_csi, 0);
if (s->port == 0) {
ret = set_fmt(md, adv_pa, 1920, 1080, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", adv_pa, 0);
ret = set_fmt(md, adv_binner, 1920, 1080, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", adv_binner, 0);
ret = set_fmt(md, "Intel IPU4 CSI2 BE SOC", s->iw, s->ih, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity Intel IPU4 CSI2 BE SOC[0]\n");
ret = set_fmt(md, "Intel IPU4 CSI2 BE SOC", s->iw, s->ih, s->in_fourcc, s->interlaced, 8);
BYE_ON(ret, "Cannot set format for entity Intel IPU4 CSI2 BE SOC[8]\n");
} else {
ret = set_fmt(md, adv_pa, s->iw, s->ih, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", adv_pa, 0);
ret = set_fmt(md, adv_binner, s->iw, s->ih, s->in_fourcc, s->interlaced, 0);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", adv_binner, 0);
ret = set_fmt(md, "Intel IPU4 CSI2 BE SOC", s->iw, s->ih, s->in_fourcc, s->interlaced, 4);
BYE_ON(ret, "Cannot set format for entity Intel IPU4 CSI2 BE SOC[4]\n");
ret = set_fmt(md, "Intel IPU4 CSI2 BE SOC", s->iw, s->ih, s->in_fourcc, s->interlaced, 12);
BYE_ON(ret, "Cannot set format for entity Intel IPU4 CSI2 BE SOC[12]\n");
}
ret = set_compose(md, adv_binner, s->iw, s->ih, 0);
BYE_ON(ret, "Cannot set compose for entity %s[%d]\n", adv_binner, 0);
ret = set_fmt(md, adv_binner, s->iw, s->ih, s->in_fourcc, s->interlaced, 1);
BYE_ON(ret, "Cannot set format for entity %s[%d]\n", adv_binner, 1);
	/* The ADV7481 HDMI input provides two link frequencies; in such a case the
	 * application needs to specify the correct one for the given color format:
	 * the link frequency with index 0 is used for YUV/RGB565 and the one with
	 * index 1 for RGB888. */
if (s->in_fourcc == V4L2_MBUS_FMT_RGB888_1X24) {
ret = set_ctrl(md, adv_binner, V4L2_CID_LINK_FREQ, 1);
} else {
ret = set_ctrl(md, adv_binner, V4L2_CID_LINK_FREQ, 0);
}
BYE_ON(ret, "Cannot set LINK FREQ ctrl for entity %s\n", adv_binner);
ret = setup_link(md, adv_pa, 0, adv_binner, 0, 0);
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", adv_pa, 0, adv_binner, 0);
ret = setup_link(md, adv_binner, 1, ipu4_csi, 0, 0);
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", adv_binner, 1, ipu4_csi, 0);
if (s->port == 0) {
ret = setup_link(md, ipu4_csi, 1, "Intel IPU4 CSI2 BE SOC", 0, MEDIA_LNK_FL_DYNAMIC);
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", ipu4_csi, 1, "Intel IPU4 CSI2 BE SOC", 4);
ret = setup_link(md, "Intel IPU4 CSI2 BE SOC", 8, "Intel IPU4 BE SOC capture 0", 0, MEDIA_LNK_FL_DYNAMIC );
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", "Intel IPU4 CSI2 BE SOC", 8, "Intel IPU4 BE SOC capture 0", 0);
if (strlen(s->video) == 0) {
strncpy(s->video, get_entity_devname(md, "Intel IPU4 BE SOC capture 0"), 31);
}
} else {
ret = setup_link(md, ipu4_csi, 1, "Intel IPU4 CSI2 BE SOC", 4, MEDIA_LNK_FL_DYNAMIC);
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", ipu4_csi, 1, "Intel IPU4 CSI2 BE SOC", 4);
ret = setup_link(md, "Intel IPU4 CSI2 BE SOC", 12, "Intel IPU4 BE SOC capture 4", 0, MEDIA_LNK_FL_DYNAMIC );
BYE_ON(ret, "Cannot settup link between %s[%d] -> %s[%d]\n", "Intel IPU4 CSI2 BE SOC", 12, "Intel IPU4 BE SOC capture 4", 0);
if (strlen(s->video) == 0) {
strncpy(s->video, get_entity_devname(md, "Intel IPU4 BE SOC capture 4"), 31);
}
}
media_device_unref(md);
free(adv_pa);
free(adv_binner);
}
static int video_querycap(struct v4l2_device *dev, unsigned int *capabilities)
{
struct v4l2_capability cap;
unsigned int caps;
int ret;
CLEAR(cap);
ret = ioctl(dev->fd, VIDIOC_QUERYCAP, &cap);
BYE_ON(ret, "VIDIOC_QUERYCAP failed: %s\n", ERRSTR);
caps = (cap.capabilities & V4L2_CAP_DEVICE_CAPS)
? cap.device_caps : cap.capabilities;
printf("Device `%s' on `%s' is a video %s (%s mplanes) device.\n",
cap.card, cap.bus_info,
caps & (V4L2_CAP_VIDEO_CAPTURE_MPLANE | V4L2_CAP_VIDEO_CAPTURE) ? "capture" : "output",
caps & (V4L2_CAP_VIDEO_CAPTURE_MPLANE | V4L2_CAP_VIDEO_OUTPUT_MPLANE) ? "with" : "without");
*capabilities = caps;
return 0;
}
static int cap_get_buf_type(unsigned int capabilities)
{
if (capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE) {
return V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
} else if (capabilities & V4L2_CAP_VIDEO_OUTPUT_MPLANE) {
return V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
} else if (capabilities & V4L2_CAP_VIDEO_CAPTURE) {
return V4L2_BUF_TYPE_VIDEO_CAPTURE;
} else if (capabilities & V4L2_CAP_VIDEO_OUTPUT) {
return V4L2_BUF_TYPE_VIDEO_OUTPUT;
} else {
printf("Device supports neither capture nor output.\n");
return -EINVAL;
}
return 0;
}
static int video_set_format(struct v4l2_device *dev, unsigned int w,
unsigned int h, unsigned int format, enum v4l2_field field,
unsigned int stride, unsigned int buffer_size)
{
struct v4l2_format fmt;
unsigned int i;
int ret;
CLEAR(fmt);
fmt.type = dev->type;
if (video_is_mplane(dev)) {
fmt.fmt.pix_mp.width = w;
fmt.fmt.pix_mp.height = h;
fmt.fmt.pix_mp.pixelformat = format;
fmt.fmt.pix_mp.field = field;
fmt.fmt.pix_mp.num_planes = 1;
for (i = 0; i < fmt.fmt.pix_mp.num_planes; i++) {
fmt.fmt.pix_mp.plane_fmt[i].bytesperline = w*4;
fmt.fmt.pix_mp.plane_fmt[i].sizeimage = buffer_size;
}
} else {
fmt.fmt.pix.width = w;
fmt.fmt.pix.height = h;
fmt.fmt.pix.pixelformat = format;
fmt.fmt.pix.field = field;
fmt.fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
}
ret = ioctl(dev->fd, VIDIOC_S_FMT, &fmt);
if (ret < 0) {
printf("Unable to set format: %s (%d).\n", strerror(errno),
errno);
return ret;
}
if (video_is_mplane(dev)) {
for (i = 0; i < fmt.fmt.pix_mp.num_planes; i++) {
printf(" * Stride %u, buffer size %u\n",
fmt.fmt.pix_mp.plane_fmt[i].bytesperline,
fmt.fmt.pix_mp.plane_fmt[i].sizeimage);
}
}
return 0;
}
static int video_get_format(struct v4l2_device *dev)
{
struct v4l2_format fmt;
unsigned int i;
int ret;
CLEAR(fmt);
fmt.type = dev->type;
ret = ioctl(dev->fd, VIDIOC_G_FMT, &fmt);
if (ret < 0) {
printf("Unable to get format: %s (%d).\n", strerror(errno),
errno);
return ret;
}
if (video_is_mplane(dev)) {
dev->num_planes = fmt.fmt.pix_mp.num_planes;
for (i = 0; i < fmt.fmt.pix_mp.num_planes; i++) {
dev->plane_fmt[i].bytesperline =
fmt.fmt.pix_mp.plane_fmt[i].bytesperline;
dev->plane_fmt[i].sizeimage =
fmt.fmt.pix_mp.plane_fmt[i].bytesperline ?
fmt.fmt.pix_mp.plane_fmt[i].sizeimage : 0;
printf(" * Stride %u, buffer size %u\n",
fmt.fmt.pix_mp.plane_fmt[i].bytesperline,
fmt.fmt.pix_mp.plane_fmt[i].sizeimage);
}
} else {
dev->num_planes = 1;
dev->plane_fmt[0].bytesperline = fmt.fmt.pix.bytesperline;
dev->plane_fmt[0].sizeimage = fmt.fmt.pix.bytesperline ? fmt.fmt.pix.sizeimage : 0;
}
return 0;
}
static void v4l2_init(struct v4l2_device *dev, struct setup s)
{
int ret;
struct v4l2_format fmt;
struct v4l2_streamparm parm;
struct v4l2_requestbuffers rqbufs;
/* Use video capture by default if query isn't done. */
unsigned int capabilities = V4L2_CAP_VIDEO_CAPTURE;
video_set_buf_type(dev, s.mplane_type);
CLEAR(parm);
parm.parm.capture.capturemode = _ISP_MODE_STILL;
dev->fd = open(dev->devname, O_RDONLY);
BYE_ON(dev->fd < 0, "failed to open %s: %s\n", dev->devname, ERRSTR);
ret = video_querycap(dev, &capabilities);
BYE_ON(ret, "VIDIOC_QUERYCAP failed: %s\n", ERRSTR);
CLEAR(fmt);
fmt.type = cap_get_buf_type(capabilities);
ret = video_get_format(dev);
BYE_ON(ret < 0, "VIDIOC_G_FMT failed: %s, %s\n", dev->devname, ERRSTR);
printf("G_FMT(start): width = %u, height = %u, 4cc = %.4s\n",
fmt.fmt.pix.width, fmt.fmt.pix.height,
(char*)&fmt.fmt.pix.pixelformat);
fmt.fmt.pix.width = s.iw;
fmt.fmt.pix.height = s.ih;
fmt.fmt.pix.pixelformat = s.in_fourcc;
if (s.in_fourcc == V4L2_MBUS_FMT_RGB888_1X24) {
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_XBGR32;
} else if (s.in_fourcc == MEDIA_BUS_FMT_RGB565_1X16) {
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_XRGB32;
} else if (s.in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16) {
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
} else if (s.in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
}
fmt.fmt.pix.field = V4L2_FIELD_NONE;
if (s.interlaced) {
fmt.fmt.pix.field = V4L2_FIELD_ALTERNATE;
}
ret = video_set_format(dev, fmt.fmt.pix.width, fmt.fmt.pix.height,
fmt.fmt.pix.pixelformat, fmt.fmt.pix.field,
OPT_STRIDE,OPT_BUFFER_SIZE);
BYE_ON(ret < 0, "VIDIOC_S_FMT failed: %s\n", ERRSTR);
	ret = video_get_format(dev);
	BYE_ON(ret < 0, "VIDIOC_G_FMT failed: %s, %s\n", dev->devname, ERRSTR);
printf("G_FMT(final): width = %u, height = %u, 4cc = %.4s\n",
fmt.fmt.pix.width, fmt.fmt.pix.height,
(char*)&fmt.fmt.pix.pixelformat);
CLEAR(rqbufs);
rqbufs.count = s.buffer_count;
rqbufs.type = dev->type;
if(dev->is_exporter) {
rqbufs.memory = V4L2_MEMORY_MMAP;
} else {
rqbufs.memory = V4L2_MEMORY_DMABUF;
}
ret = ioctl(dev->fd, VIDIOC_REQBUFS, &rqbufs);
BYE_ON(ret < 0, "VIDIOC_REQBUFS failed: %s\n", ERRSTR);
BYE_ON(rqbufs.count < s.buffer_count, "video node allocated only "
"%u of %u buffers\n", rqbufs.count, s.buffer_count);
dev->format = fmt.fmt.pix;
}
static void polling_thread(void *data)
{
struct display *display = (struct display *)data;
struct pollfd fd;
unsigned int received_frames;
unsigned int total_received_frames;
struct timeval prev_time, curr_time;
struct timeval time_diff;
struct timeval tmp;
float time_diff_secs;
int poll_res;
fd.fd = display->v4l2->fd;
fd.events = POLLIN;
gettimeofday(&prev_time, NULL);
received_frames = 0;
total_received_frames = 0;
while(running) {
poll_res = poll(&fd, 1, 500);
if (poll_res < 0) {
signal_int(0);
return;
} else if (poll_res > 0) {
if (fd.revents & POLLERR){
printf("Received IPU error - recovering\n");
error_recovery = 1;
signal_int(0);
return;
} else if(fd.revents & POLLIN) {
struct buffer *buf = v4l2_dequeue_buffer(display->v4l2, display->buffers);
if(buf) {
v4l2_queue_buffer(display->v4l2,
&display->buffers[buf->index]);
if (buf->field_type == FIELD_TYPE_BOTTOM) {
cur_bottom_buffer = buf;
} else {
cur_top_buffer = buf;
}
}
if (first_frame_received == 0) {
first_frame_received = 1;
GET_TS(time_measurements.first_frame_time);
}
received_frames++;
total_received_frames++;
if (display->s->frames_count != 0 && total_received_frames >= display->s->frames_count) {
running = 0;
}
gettimeofday(&curr_time, NULL);
timersub(&curr_time, &prev_time, &time_diff);
				time_diff_secs = (time_diff.tv_sec * 1000.0f + time_diff.tv_usec / 1000.0f) / 1000.0f;
if (time_diff_secs >= TARGET_NUM_SECONDS) {
fprintf(stdout, "Received %d frames from IPU in %6.3f seconds = %6.3f FPS\n",
received_frames, time_diff_secs, received_frames / time_diff_secs);
fflush(stdout);
received_frames = 0;
tmp = prev_time;
prev_time = curr_time;
curr_time = tmp;
}
}
}
}
}
static void
init_egl(struct display *display, int opaque)
{
const char* egl_extensions = NULL;
static const EGLint context_attribs[] = {
EGL_CONTEXT_CLIENT_VERSION, 2,
EGL_NONE
};
EGLint config_attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_RED_SIZE, 1,
EGL_GREEN_SIZE, 1,
EGL_BLUE_SIZE, 1,
EGL_ALPHA_SIZE, 1,
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
EGL_NONE
};
EGLint major, minor, n;
EGLBoolean ret;
if (opaque)
config_attribs[9] = 0;
display->egl.dpy = eglGetDisplay((EGLNativeDisplayType) display->display);
assert(display->egl.dpy);
ret = eglInitialize(display->egl.dpy, &major, &minor);
assert(ret == EGL_TRUE);
ret = eglBindAPI(EGL_OPENGL_ES_API);
assert(ret == EGL_TRUE);
ret = eglChooseConfig(display->egl.dpy, config_attribs,
&display->egl.conf, 1, &n);
assert(ret && n == 1);
display->egl.ctx = eglCreateContext(display->egl.dpy,
display->egl.conf,
EGL_NO_CONTEXT, context_attribs);
assert(display->egl.ctx);
egl_extensions = eglQueryString(display->egl.dpy, EGL_EXTENSIONS);
if (strstr(egl_extensions, "EGL_KHR_image_base") &&
strstr(egl_extensions, "EXT_image_dma_buf_import")) {
eglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC) eglGetProcAddress("eglCreateImageKHR");
eglDestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC) eglGetProcAddress("eglDestroyImageKHR");
}
BYE_ON(eglCreateImageKHR == NULL, "EGL_KHR_image_base and EXT_image_dma_buf_import not supported\n");
BYE_ON(eglDestroyImageKHR == NULL, "EGL_KHR_image_base and EXT_image_dma_buf_import not supported\n");
}
static void
create_surface(struct window *window)
{
struct display *display = window->display;
EGLBoolean ret;
uint32_t ivi_surf_id;
window->surface = wl_compositor_create_surface(display->compositor);
if (display->ias_shell) {
window->shell_surface = ias_shell_get_ias_surface(display->ias_shell,
window->surface, "DMA Test");
ias_surface_add_listener(window->shell_surface,
&ias_surface_listener, window);
}
if (display->wl_shell) {
window->shell_surface = wl_shell_get_shell_surface(display->wl_shell,
window->surface);
wl_shell_surface_add_listener(window->shell_surface,
&wl_shell_surface_listener, window);
}
if (display->ivi_application) {
ivi_surf_id = (uint32_t) getpid();
window->ivi_surface =
ivi_application_surface_create(display->ivi_application,
ivi_surf_id, window->surface);
ivi_surface_add_listener(window->ivi_surface,
&ivi_surface_listener, window);
}
if (display->wl_shell) {
wl_shell_surface_set_title(window->shell_surface, "dma-test");
}
if (display->s->render_type != RENDER_TYPE_WL) {
window->native =
wl_egl_window_create(window->surface,
window->window_size.width,
window->window_size.height);
window->egl_surface =
eglCreateWindowSurface(display->egl.dpy,
display->egl.conf,
(EGLNativeWindowType) window->native, NULL);
ret = eglMakeCurrent(window->display->egl.dpy, window->egl_surface,
window->egl_surface, window->display->egl.ctx);
assert(ret == EGL_TRUE);
}
toggle_fullscreen(window, window->fullscreen);
}
static void
init_gl_shaders(struct window *window)
{
GLint status;
GLuint frag, vert;
FILE* pf;
GLint shader_size;
GLenum shader_format;
char *shader_binary;
unsigned int got_binary_shader = 0;
unsigned int color_format;
window->gl.program = glCreateProgram();
if (glProgramBinaryOES) {
pf = fopen("shader.bin", "rb");
if (pf) {
size_t result_sz = fread(&shader_size, sizeof(shader_size), 1, pf);
if(result_sz != 1) {
BYE_ON(pf, "Failed to fopen shader file due to invalid shader size\n");
}
size_t result_sf = fread(&shader_format, sizeof(shader_format), 1, pf);
if(result_sf != 1) {
BYE_ON(pf, "Failed to fopen shader file due to invalid shader format\n");
}
size_t result_cf = fread(&color_format, sizeof(color_format), 1, pf);
if(result_cf != 1) {
BYE_ON(pf, "Failed to fopen shader file due to invalid color format\n");
}
if (color_format == window->display->s->in_fourcc) {
shader_binary = malloc(shader_size);
size_t result_ss = fread(shader_binary, shader_size, 1, pf);
if(result_ss != 1) {
BYE_ON(pf, "Failed to fopen shader file due to invalid shader file malloc.\n");
}
glProgramBinaryOES(window->gl.program,
shader_format,
shader_binary,
shader_size);
free(shader_binary);
got_binary_shader = 1;
}
fclose(pf);
		}
}
if (!got_binary_shader) {
vert = create_shader(window, vert_shader_text, GL_VERTEX_SHADER);
if (window->display->s->in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16) {
frag = create_shader(window, frag_shader_text_UYVY, GL_FRAGMENT_SHADER);
} else if (window->display->s->in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16 &&
window->display->s->render_type != RENDER_TYPE_GL_DMA) {
			/* Use the YUYV shader only when importing data as an RGBA8888
			 * texture (i.e. using RENDER_TYPE_GL). If the texture is
			 * being created directly from a DMA buffer,
			 * UFO will automatically convert YUYV into RGB when sampling,
			 * so in that case the regular RGB shader needs to be used.
			 */
frag = create_shader(window, frag_shader_text_YUYV, GL_FRAGMENT_SHADER);
} else {
frag = create_shader(window, frag_shader_text_RGB, GL_FRAGMENT_SHADER);
}
glAttachShader(window->gl.program, frag);
glAttachShader(window->gl.program, vert);
glLinkProgram(window->gl.program);
}
glGetProgramiv(window->gl.program, GL_LINK_STATUS, &status);
if (!status) {
char log[1000];
GLsizei len;
glGetProgramInfoLog(window->gl.program, 1000, &len, log);
fprintf(stderr, "Error: linking:\n%*s\n", len, log);
exit(1);
}
if (glProgramBinaryOES && !got_binary_shader) {
glGetProgramiv(window->gl.program, GL_PROGRAM_BINARY_LENGTH_OES,
&shader_size);
shader_binary = malloc(shader_size);
glGetProgramBinaryOES(window->gl.program, shader_size, NULL,
&shader_format, shader_binary);
pf = fopen("shader.bin", "wb");
if (pf) {
fwrite(&shader_size, sizeof(shader_size), 1, pf);
fwrite(&shader_format, sizeof(shader_format), 1, pf);
fwrite(&window->display->s->in_fourcc, sizeof(window->display->s->in_fourcc), 1, pf);
fwrite(shader_binary, 1, shader_size, pf);
fclose(pf);
}
free(shader_binary);
}
}
static void
init_gl(struct window *window)
{
GLsizei texture_width = window->display->s->iw >> 1;
const GLfloat HMI_W = 1.f;
const GLfloat HMI_H = 1.f;
const GLfloat HMI_Z = 0.f;
const char* gl_extensions = NULL;
GLint num_binary_program_formats = 0;
	/*
	 * If the input stream width was changed because it was not a multiple of 32, crop the
	 * additionally added pixels that will be filled by the IPU with padding.
	 */
float width_correction = (float)(window->display->s->original_iw) / window->display->s->iw;
gl_extensions = (const char *) glGetString(GL_EXTENSIONS);
if (strstr(gl_extensions, "GL_OES_get_program_binary")) {
glGetIntegerv(GL_PROGRAM_BINARY_FORMATS_OES, &num_binary_program_formats);
if (num_binary_program_formats) {
glProgramBinaryOES = (PFNGLPROGRAMBINARYOESPROC) eglGetProcAddress("glProgramBinaryOES");
glGetProgramBinaryOES = (PFNGLGETPROGRAMBINARYOESPROC) eglGetProcAddress("glGetProgramBinaryOES");
}
}
if (strstr(gl_extensions, "GL_OES_EGL_image_external")) {
glEGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC) eglGetProcAddress("glEGLImageTargetTexture2DOES");
}
BYE_ON(glEGLImageTargetTexture2DOES == NULL, "glEGLImageTargetTexture2DOES not supported\n");
init_gl_shaders(window);
glUseProgram(window->gl.program);
window->gl.pos = glGetAttribLocation(window->gl.program, "pos");
window->gl.col = glGetAttribLocation(window->gl.program, "color");
window->gl.attr_tex = glGetAttribLocation(window->gl.program, "itexcoord");
window->gl.modelview_uniform =
glGetUniformLocation(window->gl.program, "modelviewProjection");
window->gl.gl_texture_size = glGetUniformLocation(window->gl.program, "u_texsize");
glUniform2f(window->gl.gl_texture_size,
(float)texture_width,
(float)window->display->s->ih);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
window->gl.tex_top = glGetUniformLocation(window->gl.program, "u_texture_top");
window->gl.tex_bottom = glGetUniformLocation(window->gl.program, "u_texture_bottom");
glGenTextures(2, window->gl.gl_texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, window->gl.gl_texture[0]);
if (window->display->s->in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16 ||
window->display->s->in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, window->display->s->iw/2,
window->display->s->ih, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
} else {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, window->display->s->iw,
window->display->s->ih, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
window->gl.rgb565 = glGetUniformLocation(window->gl.program, "rgb565");
glUniform1i(window->gl.rgb565, 0);
if (window->display->s->in_fourcc == MEDIA_BUS_FMT_RGB565_1X16)
glUniform1i(window->gl.rgb565, 1);
}
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, window->gl.gl_texture[1]);
if (window->display->s->in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16 ||
window->display->s->in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, window->display->s->iw/2,
window->display->s->ih, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
} else {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, window->display->s->iw,
window->display->s->ih, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
window->gl.rgb565 = glGetUniformLocation(window->gl.program, "rgb565");
glUniform1i(window->gl.rgb565, 0);
if (window->display->s->in_fourcc == MEDIA_BUS_FMT_RGB565_1X16)
glUniform1i(window->gl.rgb565, 1);
}
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glActiveTexture(GL_TEXTURE0);
glUniform1i(window->gl.tex_top, 0);
glUniform1i(window->gl.tex_bottom, 1);
window->gl.swap_rb = glGetUniformLocation(window->gl.program, "swap_rb");
	/*
	 * Because GLES does not support the BGRA format, the red and blue components must
	 * be swapped in the shader. When the GL_DMA rendering method is used, the texture is
	 * created using the BGRA layout and the swap is not required.
	 */
glUniform1i(window->gl.swap_rb, window->display->s->render_type == RENDER_TYPE_GL);
window->gl.interlaced = glGetUniformLocation(window->gl.program, "interlaced");
glUniform1i(window->gl.interlaced, window->display->s->interlaced);
glClearColor(.5, .5, .5, .20);
make_matrix(window->gl.model_view, 1.0);
window->gl.hmi_vtx[0] = -HMI_W;
window->gl.hmi_vtx[1] = HMI_H;
window->gl.hmi_vtx[2] = HMI_Z;
window->gl.hmi_vtx[3] = -HMI_W;
window->gl.hmi_vtx[4] = -HMI_H;
window->gl.hmi_vtx[5] = HMI_Z;
window->gl.hmi_vtx[6] = HMI_W;
window->gl.hmi_vtx[7] = HMI_H;
window->gl.hmi_vtx[8] = HMI_Z;
window->gl.hmi_vtx[9] = HMI_W;
window->gl.hmi_vtx[10] = -HMI_H;
window->gl.hmi_vtx[11] = HMI_Z;
window->gl.hmi_tex[0] = 0.0f;
window->gl.hmi_tex[1] = 0.0f;
window->gl.hmi_tex[2] = 0.0f;
window->gl.hmi_tex[3] = 1.0f;
window->gl.hmi_tex[4] = width_correction;
window->gl.hmi_tex[5] = 0.0f;
window->gl.hmi_tex[6] = width_correction;
window->gl.hmi_tex[7] = 1.0f;
window->gl.hmi_ind[0] = 0;
window->gl.hmi_ind[1] = 1;
window->gl.hmi_ind[2] = 3;
window->gl.hmi_ind[3] = 0;
window->gl.hmi_ind[4] = 3;
window->gl.hmi_ind[5] = 2;
}
static struct buffer *v4l2_expbuffer(
struct v4l2_device *dev, unsigned int index, struct buffer *buf)
{
int ret = 0;
struct v4l2_exportbuffer expbuf;
memset(&expbuf,0,sizeof(expbuf));
expbuf.type = dev->type;
expbuf.index = buf->index;
ret = ioctl(dev->fd, VIDIOC_EXPBUF, &expbuf);
BYE_ON(ret < 0, "VIDIOC_EXPBUF failed: %s\n", ERRSTR);
buf->dbuf_fd = expbuf.fd;
return buf;
}
int
main(int argc, char **argv)
{
GET_TS(time_measurements.app_start_time);
struct v4l2_device v4l2;
struct setup s;
struct sigaction sigint;
struct display display = { 0 };
struct window window = { 0 };
int i, ret = 0;
unsigned int src_size;
pthread_t poll_thread;
struct stat tmp;
char wayland_path[255];
ret = parse_args(argc, argv, &s);
BYE_ON(ret, "failed to parse arguments\n");
GET_TS(time_measurements.before_md_init_time);
if (!s.skip_media_controller_setup) {
media_controller_init(&s);
}
memset(&v4l2, 0, sizeof v4l2);
v4l2.devname = s.video;
if (s.use_wh) {
v4l2.format.width = s.iw;
v4l2.format.height = s.ih;
}
if(!s.ow || !s.oh) {
s.ow = s.iw;
s.oh = s.ih;
}
if (s.in_fourcc)
v4l2.format.pixelformat = s.in_fourcc;
if(s.exporter) {
v4l2.is_exporter = 1;
} else {
v4l2.is_exporter = 0;
}
GET_TS(time_measurements.md_init_time);
v4l2_init(&v4l2, s);
GET_TS(time_measurements.v4l2_init_time);
	/* The width for the IPU must be a multiple of 32. Currently the driver has a bug
	 * and does not return the updated stream resolution after the S_FMT ioctl,
	 * so the width has to be rounded up here manually.
	 */
	if (s.iw % 32 != 0) {
		s.iw += 32 - (s.iw % 32);
	}
struct buffer buffers[s.buffer_count];
for(i = 0; i < (int) s.buffer_count; i++) {
buffers[i].index = i;
}
window.display = &display;
display.window = &window;
window.window_size.width = s.ow;
window.window_size.height = s.oh;
window.fullscreen = s.fullscreen;
window.output = 0;
window.print_fps = 1;
display.s = &s;
display.buffers = buffers;
display.v4l2 = &v4l2;
if (s.in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16 ||
s.in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
src_size = s.iw * s.ih * 2;
} else {
src_size = s.iw * s.ih * 4;
}
ret = init_gem(&display);
if(ret < 0) {
return ret;
}
for(i = 0; i < (int) s.buffer_count; i++) {
if(v4l2.is_exporter) {
v4l2_expbuffer(&v4l2, i, &buffers[i]);
ret = drm_buffer_to_prime(&display, &buffers[i], src_size);
if(ret < 0) {
return ret;
}
} else {
ret = create_buffer(&display, &buffers[i], src_size);
if(ret < 0) {
destroy_gem(&display);
return ret;
}
}
v4l2_queue_buffer(&v4l2, &buffers[i]);
}
int type = s.mplane_type;
ret = ioctl(v4l2.fd, VIDIOC_STREAMON, &type);
BYE_ON(ret < 0, "STREAMON failed: %s\n", ERRSTR);
GET_TS(time_measurements.streamon_time);
if(pthread_create(&poll_thread, NULL,
(void *) &polling_thread, (void *) &display)) {
printf("Couldn't create polling thread\n");
}
	const char *xdg_runtime_dir = getenv("XDG_RUNTIME_DIR");
	BYE_ON(!xdg_runtime_dir, "XDG_RUNTIME_DIR is not set\n");
	snprintf(wayland_path, sizeof(wayland_path), "%s/wayland-0", xdg_runtime_dir);
while (stat(wayland_path, &tmp) != 0) {
usleep(100);
}
GET_TS(time_measurements.weston_init_time);
display.display = wl_display_connect(NULL);
assert(display.display);
wl_list_init(&display.output_list);
display.registry = wl_display_get_registry(display.display);
wl_registry_add_listener(display.registry,
®istry_listener, &display);
wl_display_dispatch(display.display);
wl_display_roundtrip(display.display);
if (s.render_type != RENDER_TYPE_WL) {
init_egl(&display, window.opaque);
}
create_surface(&window);
if (s.render_type != RENDER_TYPE_WL) {
init_gl(&window);
}
restart:
for(i = 0; i < (int) s.buffer_count; i++) {
if (s.render_type == RENDER_TYPE_WL) {
if (s.in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16 ||
s.in_fourcc == MEDIA_BUS_FMT_RGB565_1X16) {
				// UYVY cannot be displayed using direct flipping; that is possible only with YUYV.
				// RGB565 can be displayed, but as the data is mapped to RGB888 it will have wrong
				// colors, i.e. the image will have a green tint.
				BYE_ON(1, "RGB565 and UYVY formats are not supported with RENDER_TYPE_WL\n");
} else if (s.in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
buffers[i].buf = wl_drm_create_buffer(display.wl_drm, buffers[i].flink_name, s.iw, s.ih,
s.iw*2, WL_DRM_FORMAT_YUYV);
} else {
buffers[i].buf = wl_drm_create_buffer(display.wl_drm, buffers[i].flink_name, s.iw, s.ih,
s.iw*4, WL_DRM_FORMAT_XRGB8888);
}
} else if (s.render_type == RENDER_TYPE_GL_DMA) {
if (s.in_fourcc == V4L2_MBUS_FMT_YUYV8_1X16) {
EGLint imageAttributes[] = {
EGL_WIDTH, s.iw,
EGL_HEIGHT, s.ih,
EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_YUYV,
EGL_DMA_BUF_PLANE0_FD_EXT, buffers[i].dbuf_fd,
EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
EGL_DMA_BUF_PLANE0_PITCH_EXT, s.iw*2,
EGL_NONE
};
buffers[i].khrImage = eglCreateImageKHR(display.egl.dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
(EGLClientBuffer) NULL, imageAttributes);
} else if (s.in_fourcc == V4L2_MBUS_FMT_UYVY8_1X16) {
EGLint imageAttributes[] = {
EGL_WIDTH, s.iw/2,
EGL_HEIGHT, s.ih,
EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
EGL_DMA_BUF_PLANE0_FD_EXT, buffers[i].dbuf_fd,
EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
EGL_DMA_BUF_PLANE0_PITCH_EXT, s.iw*2,
EGL_NONE
};
buffers[i].khrImage = eglCreateImageKHR(display.egl.dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
(EGLClientBuffer) NULL, imageAttributes);
} else {
EGLint imageAttributes[] = {
EGL_WIDTH, s.iw,
EGL_HEIGHT, s.ih,
EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
EGL_DMA_BUF_PLANE0_FD_EXT, buffers[i].dbuf_fd,
EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
EGL_DMA_BUF_PLANE0_PITCH_EXT, s.iw*4,
EGL_NONE
};
buffers[i].khrImage = eglCreateImageKHR(display.egl.dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
(EGLClientBuffer) NULL, imageAttributes);
}
BYE_ON(buffers[i].khrImage == 0, "Cannot create texture from DMA buffer\n");
}
}
GET_TS(time_measurements.rendering_init_time);
sigint.sa_handler = signal_int;
sigemptyset(&sigint.sa_mask);
sigint.sa_flags = SA_RESETHAND;
sigaction(SIGINT, &sigint, NULL);
if (prev_time)
free(prev_time);
prev_time = calloc(1, sizeof(struct timeval));
if(!prev_time) {
fprintf(stderr, "Cannot allocate memory: %m\n");
exit(EXIT_FAILURE);
}
if (curr_time)
free(curr_time);
curr_time = calloc(1, sizeof(struct timeval));
if(!curr_time) {
fprintf(stderr, "Cannot allocate memory: %m\n");
exit(EXIT_FAILURE);
}
gettimeofday(prev_time, NULL);
while (running && ret != -1) {
ret = wl_display_dispatch(display.display);
}
fprintf(stderr, "\ndma-test finishing loop\n");
pthread_join(poll_thread, NULL);
if (error_recovery) {
for(i = 0; i < (int) s.buffer_count; i++) {
if (s.render_type == RENDER_TYPE_WL) {
wl_buffer_destroy(buffers[i].buf);
} else if (s.render_type == RENDER_TYPE_GL_DMA) {
eglDestroyImageKHR(display.egl.dpy, buffers[i].khrImage);
}
ret = drm_intel_bo_unmap(buffers[i].bo);
close(buffers[i].dbuf_fd);
drm_intel_bo_unreference(buffers[i].bo);
}
/* Close and reopen IPU device */
close(v4l2.fd);
s.iw = s.original_iw;
v4l2_init(&v4l2, s);
running = 1;
error_recovery = 0;
		if (s.iw % 32 != 0) {
			s.iw += 32 - (s.iw % 32);
		}
for(i = 0; i < (int) s.buffer_count; i++) {
if(v4l2.is_exporter) {
v4l2_expbuffer(&v4l2, i, &buffers[i]);
ret = drm_buffer_to_prime(&display, &buffers[i], src_size);
if(ret < 0) {
return ret;
}
} else {
ret = create_buffer(&display, &buffers[i], src_size);
if(ret < 0) {
destroy_gem(&display);
return ret;
}
}
v4l2_queue_buffer(&v4l2, &buffers[i]);
}
		type = s.mplane_type;
ret = ioctl(v4l2.fd, VIDIOC_STREAMON, &type);
BYE_ON(ret < 0, "STREAMON failed: %s\n", ERRSTR);
GET_TS(time_measurements.streamon_time);
if(pthread_create(&poll_thread, NULL,
(void *) &polling_thread, (void *) &display)) {
printf("Couldn't create polling thread\n");
}
goto restart;
} else if (s.loops_count--) {
		type = s.mplane_type;
ret = ioctl(v4l2.fd, VIDIOC_STREAMOFF, &type);
BYE_ON(ret < 0, "STREAMOFF failed: %s\n", ERRSTR);
running = 1;
for(i = 0; i < (int) s.buffer_count; i++) {
v4l2_queue_buffer(&v4l2, &buffers[i]);
}
ret = ioctl(v4l2.fd, VIDIOC_STREAMON, &type);
if (ret < 0) {
printf("STREAMON ERROR\n");
running = 0;
error_recovery=1;
goto restart;
}
if(pthread_create(&poll_thread, NULL,
(void *) &polling_thread, (void *) &display)) {
printf("Couldn't create polling thread\n");
}
goto restart;
}
destroy_gem(&display);
destroy_surface(&window);
close(v4l2.fd);
free(curr_time);
free(prev_time);
if(display.ias_shell) {
ias_shell_destroy(display.ias_shell);
}
if(display.wl_shell) {
wl_shell_destroy(display.wl_shell);
}
if(display.ivi_application) {
ivi_application_destroy(display.ivi_application);
}
if(display.compositor) {
wl_compositor_destroy(display.compositor);
}
wl_display_flush(display.display);
wl_display_disconnect(display.display);
return 0;
}
|
Language Learning Beliefs of Thai EFL University Students: Dimensional Structure and Cultural Variations The objectives of this study were (a) to investigate the dimensional structure of the language learning beliefs of Thai learners of EFL, (b) to determine if the conceptually developed categories were empirically identifiable, and (c) to examine the cultural variations of language learning beliefs. Horwitz's Beliefs About Language Learning Inventory (BALLI) was administered to Thai EFL university students (N = 542). Through factor analysis, a five-factor structure was identified. This structure was similar to the Horwitz model with five categorical dimensions. Yet, some items clustered under a different category from that proposed in the BALLI model. Similarities were identified between Thai students and Taiwanese students in terms of the beliefs structure at the dimensional level and the strength of the beliefs at each item level. Seventeen BALLI items were both conceptually and empirically identified as constituting subcategories of the beliefs, representing the commonality of the language learning beliefs. |
////////////////////////////////////////////////////////////////////////////////
/*
* resolve indirect callee
* method 1 suffers from accuracy issue
* method 2 is too slow
* method 3 use the fact that most indirect call use function pointer loaded
* from struct(mi2m, kernel interface)
*/
FunctionSet gatlin::resolve_indirect_callee_ldcst_kmi(CallInst *ci, int &err,
int &kmi_cnt,
int &dkmi_cnt) {
FunctionSet fs;
#if (LLVM_VERSION_MAJOR <= 10)
Value *cv = ci->getCalledValue();
#else
Value *cv = ci->getCalledOperand();
#endif
if (StructType *ldbcstty = identify_ld_bcst_struct(cv)) {
#if 0
errs()<<"Found ld+bitcast sty to ptrty:";
if (ldbcstty->isLiteral())
errs()<<"Literal, ";
else
errs()<<ldbcstty->getName()<<", ";
#endif
Indices indices;
indices.push_back(0);
err = 2;
ModuleSet ms;
find_in_mi2m(ldbcstty, ms);
if (ms.size()) {
err = 1;
for (auto m : ms)
if (Value *v = get_value_from_composit(m, indices)) {
Function *f = dyn_cast<Function>(v);
assert(f);
fs.insert(f);
}
}
if (fs.size() != 0) {
kmi_cnt++;
goto end;
}
if (dmi_type_exists(ldbcstty, dmi))
err = 1;
indices.clear();
indices.push_back(0);
indices.push_back(0);
if (FunctionSet *_fs = dmi_exists(ldbcstty, indices, dmi)) {
for (auto *f : *_fs)
fs.insert(f);
dkmi_cnt++;
goto end;
}
#if 0
errs()<<"Try rkmi\n";
#endif
}
end:
if (fs.size())
err = 0;
return fs;
} |
from django.apps import AppConfig
class NngconsultingConfig(AppConfig):
name = 'nngconsulting'
|
This Christmas, I was not only the grateful recipient of an awesome group of gifts, but also clearly a victim of the work of a master stalker. My Secret Santa must've done a great job trawling through my many reddit postings to get ideas about what to get me, because I received just the perfect set of gifts.
I received -
A Gift voucher for a tour of the Emirates Stadium (for two people)
'David Mitchell's Soapbox' DVD
An 'Apocalyptica' album
A 'Potato Crisp Grabber'
Emirates Stadium tour -
It was probably quite clear from my post history that I am an Arsenal fan, so this gift was fantastic. I've never done it before, so I can't wait to go on it! An incredibly generous gift; my Secret Santa deserves an award for this alone.
Soapbox DVD -
Again, through the power of stalking, my SS deduced that I was a huge fan of David Mitchell, and they were right! This DVD displays his wit at its finest, a must-have for any Mitchell fan...having just read that sentence back to myself, it sounds like I've just copied it from one of the quotes on the back cover. I promise that I have not.
Apocalyptica album -
In my description of my likes/hobbies, I mentioned that I loved discovering new music. This album is just the kind of thing I was looking for, unique and highly listenable. Thank you SS for introducing me to a whole new genre!
Crisp Grabber-
As the world's biggest Karl Pilkington fan, I cannot even begin to describe how epic this gift was, so I'll just leave this here. https://www.youtube.com/watch?v=yi5EeZsvhHU
Thank you so much SS, your gifts genuinely made my Christmas a lot better. I was really lucky to get rematched with you. I would have never expected such generosity to come from someone whom I've never met, and who is probably many miles away. Because you've made this such a wonderful experience, I'll definitely be doing this again next year. Again, thank you. :) |
Kid Told Westboro Protesters 'God Hates No One' Because, 'That Is True' : The Two-Way Josef Miles is a hero to many for his simple statement. He says he just doesn't like seeing Westboro Baptist's controversial signs protesting homosexuality, abortion and other issues.
Josef Miles, making his own statement.
"I just don't like seeing those signs and I kind of wanted to put a stop to that."
That's 9-year-old Josef Miles' simple explanation for why he held up a notepad that said "GOD HATES NO ONE" as supporters of the tiny Westboro Baptist Church staged another small demonstration featuring their signs that say God hates homosexuals.
His Mother's Day Weekend action in Topeka, Kan., which we reported about last week, won Josef fans across the Web after photos of him started to spread. Today, he and his mom spoke with Tell Me More host Michel Martin.
Josef's mother, Patty Akrouche, told Michel that she and her son have often seen the Westboro Baptist protesters in Topeka, where the church is based. As we've said before, Westboro Baptist has gained notice in recent years for protesting against homosexuality, abortion and other issues outside the funerals of military veterans and celebrities.
Josef had in the past asked her about the signs, which feature an objectionable F-word when referring to homosexuals. Akrouche had told her son that the signs were using "a hate word" to refer to men who love men and women who love women.
As he reflected on that, Josef said, he decided that "I didn't want everybody to think that Topeka has a bad image." So on the day earlier this month when they came upon the protesters again, "I thought about it for a minute" and concluded that "God hates no one" would be the right thing to say.
Because "that is true," Josef said.
Akrouche told Michel that "it's a privilege and honor" to be Josef's mom. She has better conversations with him, she said, than with many adults: "I learn something new from him every day."
As for Josef, he felt "really brave and confident" that day (the Westboro protesters "were respectful," by the way, according to Akrouche). And now he's a little surprised by the attention he's gotten. "I thought it would be just, like, 'oh, that's really great, good for you,' " he said, not something that would go viral. |
Export and Import Elasticities for Japan: New Estimates This paper re-examines aggregate and disaggregate import and export demand functions for Japan. This re-examination is warranted because the country has undergone substantial structural transformation, particularly with regard to the East Asian production chain. In the long run, nonfuel goods imports are highly income sensitive, while the price elasticity is near unity. Goods exports are similarly income sensitive. The price elasticity is around 0.7. In these preferred specifications, the Marshall-Lerner conditions hold, so that an exchange rate depreciation results in an improved trade balance.
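As a brief illustrative note (not part of the paper's abstract): writing the export and import price elasticities as absolute values, the Marshall-Lerner condition referenced above is

\[
|\varepsilon_X| + |\varepsilon_M| > 1 ,
\]

so the reported long-run estimates, roughly 0.7 for exports and near unity for imports, sum to about 1.7 and satisfy the condition, which is why a depreciation improves the trade balance. |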
/**
* Substitutes at (un)marshalling time the XML namespaces used by SIS by the namespaces used in the XML document.
* This class is used internally by {@link FilteredStreamReader} and {@link FilteredStreamWriter} only.
*
* <div class="section">The problem</div>
 * When the XML schemas of an international standard are updated, the URL of the namespace is often modified.
* For example when GML has been updated from version 3.1 to 3.2, the URL mandated by the international standard
* changed from {@code "http://www.opengis.net/gml"} to {@code "http://www.opengis.net/gml/3.2"}
* (XML namespaces usually have a version number or publication year - GML before 3.2 were an exception).
*
* The problem is that namespaces in JAXB annotations are static. The straightforward solution is
* to generate complete new set of classes for every GML version using the {@code xjc} compiler.
 * But this approach has several drawbacks:
*
* <ul>
* <li>Massive code duplication (hundreds of classes, many of them strictly identical except for the namespace).</li>
 * <li>Handling the above-cited class duplication requires either a bunch of {@code if (x instanceof Y)} in every
 * corner of SIS (inconceivable), or modifying the {@code xjc} output in order to give the generated classes a
 * common parent class or interface. In the latter case, the auto-generated classes require significant work
 * anyway.</li>
 * <li>The namespaces of all versions appear in the {@code xmlns} attributes of the root element (we cannot always
 * create separate JAXB contexts), which is confusing and prevents usage of the usual prefixes for all versions
 * except one.</li>
* </ul>
*
* An alternative is to support only one version of each standard, and transform XML documents before unmarshalling
* or after marshalling if they use different versions of standards. We could use XSLT for that, but this is heavy.
* A lighter approach is to use {@link javax.xml.stream.XMLStreamReader} and {@link javax.xml.stream.XMLStreamWriter}
* as "micro-transformers".
*
* @author Martin Desruisseaux (Geomatys)
* @since 0.4
* @version 0.4
* @module
*
* @see <a href="http://issues.apache.org/jira/browse/SIS-152">SIS-152</a>
*/
final class FilteredNamespaces implements NamespaceContext {
/**
* The context to wrap, given by {@link FilteredStreamReader} or {@link FilteredStreamWriter}.
*
* @see javax.xml.stream.XMLStreamReader#getNamespaceContext()
* @see javax.xml.stream.XMLStreamWriter#getNamespaceContext()
*/
private final NamespaceContext context;
/**
* The URI replacements to apply when going from the wrapped context to the filtered context.
*
* @see FilterVersion#toView
*/
private final Map<String,String> toView;
/**
* The URI replacements to apply when going from the filtered context to the wrapped context.
* This map is the converse of {@link #toView}.
*
* @see FilterVersion#toImpl
*/
private final Map<String,String> toImpl;
/**
* Creates a new namespaces filter for the given target version.
*/
FilteredNamespaces(final NamespaceContext context, final FilterVersion version, final boolean inverse) {
this.context = context;
if (!inverse) {
toView = version.toView;
toImpl = version.toImpl;
} else {
toView = version.toImpl;
toImpl = version.toView;
}
}
/**
* Wraps this {@code FilteredNamespaces} in a new instance performing the inverse of the replacements
* specified by the given version.
*/
NamespaceContext inverse(final FilterVersion version) {
if (toView == version.toView && toImpl == version.toImpl) {
return this;
}
return new FilteredNamespaces(this, version, true);
}
/**
* Returns the URI to make visible to the user of this filter.
*/
private String toView(final String uri) {
final String replacement = toView.get(uri);
return (replacement != null) ? replacement : uri;
}
/**
* Returns the URI used by the {@linkplain #context}.
*/
private String toImpl(final String uri) {
final String replacement = toImpl.get(uri);
return (replacement != null) ? replacement : uri;
}
/**
* Returns the namespace for the given prefix.
*/
@Override
public String getNamespaceURI(final String prefix) {
return toView(context.getNamespaceURI(prefix));
}
/**
* Returns the prefix for the given namespace.
*/
@Override
public String getPrefix(final String namespaceURI) {
return context.getPrefix(toImpl(namespaceURI));
}
/**
* Returns all prefixes for the given namespace.
*/
@Override
@SuppressWarnings("unchecked")
public Iterator<String> getPrefixes(final String namespaceURI) {
return context.getPrefixes(toImpl(namespaceURI));
}
}
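As a rough usage sketch (an illustration only, not the actual FilteredStreamReader or FilteredStreamWriter code), a stream wrapper living in the same package could expose a filtered context as follows; the ExampleFilteredReader class and its fields are hypothetical, while FilteredNamespaces and FilterVersion are the types defined in this package.

import javax.xml.namespace.NamespaceContext;
import javax.xml.stream.XMLStreamReader;

// Hypothetical sketch: delegates namespace lookups through FilteredNamespaces so that
// the URIs seen by the unmarshaller are the ones expected by SIS rather than the
// ones declared in the source document.
final class ExampleFilteredReader {
    private final XMLStreamReader in;      // reader positioned on the source XML document
    private final FilterVersion version;   // URI mapping to apply (toView / toImpl)

    ExampleFilteredReader(final XMLStreamReader in, final FilterVersion version) {
        this.in = in;
        this.version = version;
    }

    NamespaceContext getNamespaceContext() {
        // Wraps the underlying context; calling inverse(version) on the result would
        // give the opposite mapping, e.g. for use at marshalling time.
        return new FilteredNamespaces(in.getNamespaceContext(), version, false);
    }
} |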
There has been a lot of news about the tariffs imposed by the current administration. But the one thing I haven't seen reported is that those of us in the industry have seen the cost of the materials we use double in the last couple of months.
Here at LaVanway Cutler, a custom CNC machining manufacturer in Chico, we have seen the cost of raw material skyrocket. All these tariffs have just lowered our profit margin. We are just getting started and had hoped to hire some employees soon, but now that's on hold while we try our best to do it all ourselves. I just wish someone would explain to me how this is going to help the everyday person if companies like ours cannot hire or must let go of employees to stay competitive in the market.
The simple fact is that China can lower its cost per product because the companies there are subsidized. Now Canada is setting aside 2 billion to help its companies in this industry. All this means they can maintain the current prices of their products and services, while we in the U.S. have to raise prices or close the doors. Maybe this wouldn't be so bad, but we are in a global market, not just a local one, and our customers overseas will be seeking services from these lower-cost companies.
Good luck to all of you metal workers out there. I'm becoming afraid many of our industries won't be able to weather this storm. |
/* Copyright 2015 Codethink Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "ofc/parse.h"
static ofc_parse_call_arg_t* ofc_parse__call_arg(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
bool named, bool force, unsigned* len)
{
ofc_parse_call_arg_t* call_arg
= (ofc_parse_call_arg_t*)malloc(
sizeof(ofc_parse_call_arg_t));
if (!call_arg) return NULL;
unsigned dpos = ofc_parse_debug_position(debug);
unsigned i = 0;
call_arg->name = OFC_SPARSE_REF_EMPTY;
if (named)
{
ofc_sparse_ref_t ident;
unsigned l = ofc_parse_ident(
src, &ptr[i], debug, &ident);
if ((l > 0) && (ptr[i + l] == '='))
{
call_arg->name = ident;
i += (l + 1);
}
else if (force)
{
free(call_arg);
ofc_parse_debug_rewind(debug, dpos);
return NULL;
}
}
call_arg->expr = NULL;
if ((ptr[i] == '*') || (ptr[i] == '&'))
{
i += 1;
unsigned l = 0;
call_arg->expr = ofc_parse_expr_integer_variable(
src, &ptr[i], debug, &l);
call_arg->type = ((call_arg->expr != NULL)
? OFC_PARSE_CALL_ARG_RETURN
: OFC_PARSE_CALL_ARG_ASTERISK);
i += l;
}
else
{
unsigned l;
call_arg->expr = ofc_parse_expr(
src, &ptr[i], debug, &l);
if (!call_arg->expr)
{
free(call_arg);
ofc_parse_debug_rewind(debug, dpos);
return NULL;
}
i += l;
call_arg->type = OFC_PARSE_CALL_ARG_EXPR;
}
call_arg->src = ofc_sparse_ref(src, ptr, i);
if (len) *len = i;
return call_arg;
}
ofc_parse_call_arg_t* ofc_parse_call_arg_force_named(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse__call_arg(
src, ptr, debug,
true, true, len);
}
ofc_parse_call_arg_t* ofc_parse_call_arg_named(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse__call_arg(
src, ptr, debug,
true, false, len);
}
ofc_parse_call_arg_t* ofc_parse_call_arg(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse__call_arg(
src, ptr, debug,
false, false, len);
}
void ofc_parse_call_arg_delete(
ofc_parse_call_arg_t* call_arg)
{
if (!call_arg)
return;
ofc_parse_expr_delete(call_arg->expr);
free(call_arg);
}
bool ofc_parse_call_arg_print(
ofc_colstr_t* cs, const ofc_parse_call_arg_t* call_arg)
{
if (!call_arg)
return false;
if (!ofc_sparse_ref_empty(call_arg->name))
{
if (!ofc_sparse_ref_print(cs, call_arg->name)
|| !ofc_colstr_atomic_writef(cs, "="))
return false;
}
switch (call_arg->type)
{
case OFC_PARSE_CALL_ARG_RETURN:
case OFC_PARSE_CALL_ARG_ASTERISK:
if (!ofc_colstr_atomic_writef(cs, "*"))
return false;
break;
default:
break;
}
switch (call_arg->type)
{
case OFC_PARSE_CALL_ARG_RETURN:
case OFC_PARSE_CALL_ARG_EXPR:
if (!ofc_parse_expr_print(
cs, call_arg->expr))
return false;
break;
default:
break;
}
return true;
}
static ofc_parse_call_arg_list_t* ofc_parse_call_arg__list(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
bool named, bool force, unsigned* len)
{
ofc_parse_call_arg_list_t* list
= (ofc_parse_call_arg_list_t*)malloc(
sizeof(ofc_parse_call_arg_list_t));
if (!list) return NULL;
list->count = 0;
list->call_arg = NULL;
unsigned i = ofc_parse_list(src, ptr, debug, ',',
&list->count, (void***)&list->call_arg,
(named ? (force ? (void*)ofc_parse_call_arg_force_named
: (void*)ofc_parse_call_arg_named)
: (void*)ofc_parse_call_arg),
(void*)ofc_parse_call_arg_delete);
if (i == 0)
{
free(list);
return NULL;
}
if (len) *len = i;
return list;
}
ofc_parse_call_arg_list_t* ofc_parse_call_arg_list_force_named(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse_call_arg__list(
src, ptr, debug, true, true, len);
}
ofc_parse_call_arg_list_t* ofc_parse_call_arg_list_named(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse_call_arg__list(
src, ptr, debug, true, false, len);
}
ofc_parse_call_arg_list_t* ofc_parse_call_arg_list(
const ofc_sparse_t* src, const char* ptr,
ofc_parse_debug_t* debug,
unsigned* len)
{
return ofc_parse_call_arg__list(
src, ptr, debug, false, false, len);
}
ofc_parse_call_arg_list_t* ofc_parse_call_arg_list_wrap(
ofc_parse_call_arg_t* arg)
{
if (!arg)
return NULL;
ofc_parse_call_arg_list_t* list
= (ofc_parse_call_arg_list_t*)malloc(
sizeof(ofc_parse_call_arg_list_t));
if (!list) return NULL;
list->call_arg = (ofc_parse_call_arg_t**)malloc(
sizeof(ofc_parse_call_arg_t*));
if (!list->call_arg)
{
free(list);
return NULL;
}
list->count = 1;
list->call_arg[0] = arg;
return list;
}
void ofc_parse_call_arg_list_delete(
ofc_parse_call_arg_list_t* list)
{
if (!list)
return;
ofc_parse_list_delete(
list->count, (void**)list->call_arg,
(void*)ofc_parse_call_arg_delete);
free(list);
}
bool ofc_parse_call_arg_list_print(
ofc_colstr_t* cs, const ofc_parse_call_arg_list_t* list)
{
return ofc_parse_list_print(
cs, list->count, (const void**)list->call_arg,
(void*)ofc_parse_call_arg_print);
}
|
import React from 'react';
import { Button } from './styles';
interface ISendButtonProps {
show: boolean;
onClick: VoidFunction;
}
const SendButton = ({ show, onClick }: ISendButtonProps) => {
return (
<Button onClick={onClick} show={show}>
<span>Send</span>
</Button>
);
};
export default SendButton;
|
Visual Legacies of Slavery and Emancipation The 150th anniversary of the Emancipation Proclamation provides an occasion to reflect on the ways visual artists have responded to and envisioned the impact of that life-changing declaration on the experience of slavery and the meaning of freedom. Signed into law by President Abraham Lincoln in the midst of the Civil War on January 1, 1863, the Emancipation Proclamation declared, "All persons held as slaves are, and henceforward shall be free." But what did it mean to be free? How would freedom take shape? The careful wording and conditions of the Emancipation Proclamation and its timing in relation to the Civil War are worth noting if we consider how a range of period and contemporary artworks selected for the exhibition We Hold These Truths... chronicle and reflect on its significance over the years.1 For, although the Emancipation Proclamation gave slaves who sought refuge behind Union lines a legal claim to freedom, the freedom it promised required the Union to win the war. Absent from the proclamation were any instructions or provisions on how formerly enslaved people could make their way into a free world. Questions remained about how newly freed slaves would support themselves or build economic self-sufficiency. How might families torn apart by slavery become reunited with loved ones or imagine new communities of their own? When would formerly enslaved people enjoy the full rights of citizenship or at least witness a shift in local and national power relations, so that they might participate in the political system? In We Hold These Truths... both the promise and the lack of clarity surrounding these issues inspired artists of the day and today to question and consider – to enter into a conversation about the meaning of slavery and emancipation. These conversations, presented as a sort of visual call and response, are reflected in the pairings of works from the past and present and related works on similar themes. A conversation that probes how we make sense of the legacy of emancipation in historic documents and the artistic imagination is one worth having on the occasion of the sesquicentennial of the Emancipation Proclamation. |
// +build windows
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package system
import (
"os/exec"
"strings"
)
// dockerEndpoint is the os specific endpoint for docker communication
const dockerEndpoint = "npipe:////./pipe/docker_engine"
// DefaultSysSpec is the default SysSpec for Windows
var DefaultSysSpec = SysSpec{
OS: "Microsoft Windows Server 2016",
KernelSpec: KernelSpec{
Versions: []string{`10\.0\.1439[3-9]`, `10\.0\.14[4-9][0-9]{2}`, `10\.0\.1[5-9][0-9]{3}`, `10\.0\.[2-9][0-9]{4}`, `10\.[1-9]+\.[0-9]+`}, //requires >= '10.0.14393'
Required: []KernelConfig{},
Optional: []KernelConfig{},
Forbidden: []KernelConfig{},
},
Cgroups: []string{},
RuntimeSpec: RuntimeSpec{
DockerSpec: &DockerSpec{
Version: []string{`18\.06\..*`}, //Requires [18.06] or later
GraphDriver: []string{"windowsfilter"},
},
},
}
// KernelValidatorHelperImpl is the 'windows' implementation of KernelValidatorHelper
type KernelValidatorHelperImpl struct{}
var _ KernelValidatorHelper = &KernelValidatorHelperImpl{}
// GetKernelReleaseVersion returns the Windows release version (e.g. 10.0.14393) as a string
func (o *KernelValidatorHelperImpl) GetKernelReleaseVersion() (string, error) {
args := []string{"(Get-CimInstance Win32_OperatingSystem).Version"}
releaseVersion, err := exec.Command("powershell", args...).Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(releaseVersion)), nil
}
|
Association of SOD1 and SOD2 single nucleotide polymorphisms with susceptibility to gastric cancer in a Korean population Oxidative stress is accepted as one of the main factors involved in the development and progression of cancer. Superoxide dismutases (SODs) are important in avoiding oxidative stress by eliminating reactive oxygen species (ROS). To determine whether single nucleotide polymorphisms at G7958A within SOD1 and at T5482C within SOD2 are associated with an increased susceptibility to gastric cancer, we investigated the genotype and allele frequencies of the genes from 294 gastric cancer patients and 300 healthy individuals. A polymerase chain reaction-single strand conformation polymorphism (PCR-SSCP) assay was used to identify the SOD1 G7958A and the SOD2 T5482C genotypes. Statistically significant differences in the genotype and allele frequencies of SOD2 T5482C were found between the healthy controls and gastric cancer patients (p = 0.0001 and p < 0.0001, respectively). When the data were stratified according to gastric cancer histological subtypes, the risk of both diffuse- and intestinal-type gastric cancer was statistically higher for carriers of the C allele compared with carriers of the T allele. However, there were no statistically significant differences in genotype distribution (p = 0.5069) and allele frequencies (p = 0.3714) of SOD1 G7958A between gastric cancer patients and controls. Our findings suggest that polymorphism of the SOD2 T5482C may be closely associated with an increased susceptibility to the development and differentiation of gastric cancer in the Korean population.
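As a rough illustration of the sort of case-control comparison described above (not the authors' analysis), the short Python sketch below runs a chi-square test on a 2x3 genotype table; the counts are invented for demonstration, and only the group sizes (294 patients, 300 controls) come from the abstract.

from scipy.stats import chi2_contingency

# Hypothetical TT / TC / CC genotype counts -- illustrative only, not the study's data.
genotype_table = [
    [90, 140, 64],   # gastric cancer patients (n = 294)
    [130, 130, 40],  # healthy controls (n = 300)
]
chi2, p_value, dof, expected = chi2_contingency(genotype_table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
|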
Some actors method-act, Jenny Erpenbeck method-writes. For her debut novella, the German author enrolled for a month at a secondary school, swapping dresses and high heels for T-shirts and trainers, pretending to be 17 when she was in fact 27. The only person in on the experiment was the head teacher and, a few sceptical glances on day one aside, none of her fellow pupils noticed.
Erpenbeck describes her experiment, carried out in 1994, as a Schnapsidee – an idea so crazy you could only come up with it when drunk. But the book – a haunting, fairytale-esque story called The Old Child, eventually published in 1999 – garnered her a reputation as one of the more weighty literary talents in Germany. News weekly Der Spiegel tagged her and a small band of other writers “Grass’s grandchildren”, with its cover featuring Erpenbeck resting her head on the kind of tin drum played by Grass’s Oskar Matzerath – perhaps the most famous childhood refusenik in European fiction after Peter Pan.
Last week, Erpenbeck’s sixth novel, The End of Days, won her and her American translator Susan Bernofsky the Independent foreign fiction prize, thus cementing her status as one of the most significant voices writing in Europe today. It is a fitting choice: if The Old Child was a book about a child who refuses to grow up, her latest novel explores what happens when we actually do grow old.
If her latest novel didn’t require the same kind of immersive research as The Old Child, it’s because in The End of Days Erpenbeck allows herself to draw directly on her own biography. Born in East Berlin in 1967, she grew up in a family environment that guaranteed intellectual stimulation. Her father, John, is widely known as a scientist and author; her late mother, Doris Kilias, was a translator from Arabic into German. Her grandparents on her father’s side, Fritz Erpenbeck and Hedda Zinner, were both leading figures in East Germany’s literary establishment and members of the Communist party.
Erpenbeck says the prolific careers of her family members freed her up to pursue other passions; she wasn’t pressured into becoming an author: “What I loved as a child was the feeling that I could go on to do anything.” She trained first as a bookbinder, then as an opera director, working with choreographer Ruth Berghaus, director Heiner Müller and Werner Herzog. Her return to writing took place only by chance, in between jobs in the theatre, while she was selling rolls of bread at the local bakery. Those who look for it carefully can still find traces of dramaturgy in her writing: in The End of Days, people sometimes seem less important than the props they carry through life.
In Germany, the GDR-era family saga has become a literary cliche in recent years. Twenty-five years after the fall of the Berlin Wall, East Germans may be underrepresented in their government’s cabinet, largely absent from the boards of the big stock-market companies, and without a football team in the first division – but in the book charts, they punch above their weight. Since the inception of the German book prize 10 years ago, the biggest accolade in German-language publishing has gone to four novels about family life in the eastern half of the country, including Eugen Ruge’s In Times of Fading Light, recently translated by Faber, and last year’s winner Kruso, by poet Lutz Seiler.
If Erpenbeck’s novels and novellas are “East German”, it expresses itself not so much in content as in form – an urge to break with the conventions of linear storytelling because it simply doesn’t reflect experience. In a country whose borders were redrawn as frequently over the last two centuries as Germany’s were, the promise of a straightforward narrative can seem hollow.
Now aged 48, Erpenbeck still has something of the old child about her – a sense of a wiser mind inhabiting a younger body. She can light up with enthusiasm about a new singer-songwriter she has discovered or her love of Earl Grey tea. But she is unable, or unwilling, to skirt over politics in the way other authors of her generation often do. |
def create_ack(self, sender):
packet = Packet()
packet.TimeToLive = 0
packet.SequenceNumber = self.SequenceNumber
packet.IsAcknowledgement = True
packet.IsReliable = False
packet.SenderID = sender
packet.Data = ''
return packet |
The Netanyahu government has yet to make clear how it will negotiate
Israel has told the European Union to stop criticising Benjamin Netanyahu's government or risk being excluded from future Middle East peace negotiations. A foreign ministry official called EU envoys in Israel after a commissioner in Brussels suggested freezing a move to upgrade EU-Israeli relations. The commissioner said Netanyahu should commit to talks with the Palestinians. The warning comes ahead of the first European trip by Avigdor Lieberman, Israel's new foreign minister. Israeli media say the warnings have been issued by the deputy director for European affairs at the Israeli foreign ministry, Rafi Barak. His main target was the EU External Affairs Commissioner Benita Ferrero-Waldner.
The UK embassy in Tel Aviv has confirmed it was contacted by Mr Barak but refused to disclose details of the conversation. "We want the European Union to be a partner but it is important to hold a mature and discreet dialogue and not to resort to public declarations," Rafi Barak reportedly told diplomats, according to a report in Haaretz. He concluded by "warning" that Europe's influence in the area would be undermined. "Israel is asking Europe to lower the tone and conduct a discreet dialogue," Rafi Barak is quoted saying. "However, if these declarations continue, Europe will not be able to be part of the diplomatic process, and both sides will lose." Correspondents say it is far from clear whether Ms Ferrero-Waldner was expressing an official view of the European Union towards Israel. Israeli officials have told the BBC that they requested a month-long postponement of a ministerial-level meeting in May which discusses the EU-Israeli Association agreement regulating bilateral ties. The postponement "is to allow the new government time to formulate its policies" before the meeting, foreign ministry officials said. Prime Minister Benjamin Netanyahu has so far refused to back the principle of a Palestinian state while Foreign Minister Avigdor Lieberman has said the Israeli-Palestinian peace process is a "dead end".
|
A parallel computing method for fault diagnosis based on Hadoop To improve the efficiency and the speed of fault diagnosis for multimode processes in a big data environment, a new parallel computation fault diagnosis approach, called GFCM-VMD-HM, is proposed for the multimodal TE process based on the Hadoop platform. Firstly, the global fuzzy c-means (GFCM) clustering algorithm is applied to distinguish the operation modes. Then, the Variational Mode Decomposition (VMD) algorithm is introduced for the sake of filtering. After that, the data are encoded as string files and put into the Hadoop Distributed File System (HDFS). Finally, a character statistics program is written to output multiple files under the MapReduce framework to complete the fault feature extraction and diagnosis. An experimental study is carried out to evaluate the performance of the proposed algorithm. The results obtained show that the method saves fault diagnosis time by means of parallel computing.
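The character-statistics step maps naturally onto MapReduce. The sketch below is a plain-Python illustration of that idea only; the sample records, function names and per-character counting are assumptions, not the authors' Hadoop job, which would run as mapper and reducer tasks over the encoded files stored in HDFS.

from collections import Counter
from itertools import chain

def map_chars(encoded_record):
    """Map phase: emit a (character, 1) pair for every character in one record."""
    return [(ch, 1) for ch in encoded_record]

def reduce_counts(pairs):
    """Reduce phase: sum the counts for each character key."""
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Stand-ins for encoded string records read from HDFS (hypothetical sample data).
records = ["abbacd", "bcd", "aaev"]
feature_counts = reduce_counts(chain.from_iterable(map_chars(r) for r in records))
print(feature_counts)  # {'a': 4, 'b': 3, 'c': 2, 'd': 2, 'e': 1, 'v': 1}
|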
use crate::{compiler::Compiler, compiler_stack::CompilerKind, visitor::NodeVisitor};
use dice_core::{
error::{
codes::INVALID_EXPORT_USAGE,
context::{Context, ContextKind, EXPORT_ONLY_ALLOWED_IN_MODULES, EXPORT_ONLY_ALLOWED_IN_TOP_LEVEL_SCOPE},
Error,
},
protocol::{module::EXPORT, ProtocolSymbol},
tags,
};
use dice_syntax::{ExportDecl, SyntaxNode, VarDecl, VarDeclKind};
impl NodeVisitor<&ExportDecl> for Compiler {
fn visit(&mut self, node: &ExportDecl) -> Result<(), Error> {
if !matches!(self.context()?.kind(), CompilerKind::Module) {
return Err(Error::new(INVALID_EXPORT_USAGE)
.push_context(
Context::new(EXPORT_ONLY_ALLOWED_IN_MODULES, ContextKind::Note).with_tags(tags! {
kind => self.context()?.kind().to_string()
}),
)
.with_source(self.source.clone())
.with_span(node.span));
}
if self.context()?.scope_stack().top_mut()?.depth > 1 {
return Err(Error::new(INVALID_EXPORT_USAGE)
.push_context(Context::new(EXPORT_ONLY_ALLOWED_IN_TOP_LEVEL_SCOPE, ContextKind::Note))
.with_source(self.source.clone())
.with_span(node.span));
}
let export_slot = self
.context()?
.scope_stack()
.local(EXPORT.get())
.expect("#export should always be defined in modules.")
.slot as u8;
self.assembler()?.load_local(export_slot, node.span);
self.visit(node.export)?;
let field_name = match self.syntax_tree.get(node.export) {
SyntaxNode::VarDecl(VarDecl {
kind: VarDeclKind::Singular(name),
..
}) => name.clone(),
SyntaxNode::FnDecl(fn_decl) => fn_decl.name.identifier.clone(),
SyntaxNode::ClassDecl(class_decl) => class_decl.name.identifier.clone(),
SyntaxNode::LitIdent(lit_ident) => lit_ident.identifier.clone(),
_ => unreachable!("Invalid export node type encountered."),
};
self.assembler()?.store_field(field_name, node.span)?;
Ok(())
}
}
|
/*
* Copyright 2022 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package dev.aherscu.qa.jgiven.commons.model;
import java.time.*;
import javax.annotation.concurrent.*;
import dev.aherscu.qa.jgiven.commons.actions.*;
import dev.aherscu.qa.jgiven.commons.fixtures.*;
import dev.aherscu.qa.jgiven.commons.utils.*;
import dev.aherscu.qa.jgiven.commons.verifications.*;
import dev.aherscu.qa.tester.utils.*;
/**
* Strongly typed scenario. Ensures that all stages are of same type.
*
* @author aherscu
*
* @param <GIVEN>
* the fixtures stage
* @param <WHEN>
* the actions stage
* @param <THEN>
* the verifications stage
* @param <T>
* type of scenario to be enforced for all stages
*/
@ThreadSafe
public class TypedScenarioTest<T extends AnyScenarioType, GIVEN extends GenericFixtures<T, ?> & ScenarioType<T>, WHEN extends GenericActions<T, ?> & ScenarioType<T>, THEN extends GenericVerifications<T, ?> & ScenarioType<T>>
extends ScenarioTestEx<GIVEN, WHEN, THEN> {
/**
* Base64-encoded epoch-milliseconds of this run using
* {@link Instant#toEpochMilli()}.
*/
public static final String EPOCH_MILLI_64 =
Base64Utils.encode(Instant.now().toEpochMilli());
}
|
"""Mixins for the database models.
"""
from sqlalchemy import inspect
from sqlalchemy.orm import ColumnProperty
class NonPrimaryKeyEquivalenceMixin(object):
"""A mixin for models that should be compared by their non-primary key
fields.
"""
def __eq__(self, other):
"""If all non-primary key fields of these objects are the same, then
these objects are equivalent. Borrowed from sqlalchemy-utils.
"""
for prop in inspect(other.__class__).iterate_properties:
if not isinstance(prop, ColumnProperty):
continue
if prop.columns[0].primary_key:
continue
if not getattr(other, prop.key) == getattr(self, prop.key):
return False
return True
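# A minimal usage sketch (an illustration, not part of the original module),
# assuming a plain declarative model: two rows holding the same data but
# different primary keys compare as equal, while rows with different data do not.
if __name__ == "__main__":
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(NonPrimaryKeyEquivalenceMixin, Base):
        """Toy model used only for this demonstration."""
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    assert User(id=1, name="ada") == User(id=2, name="ada")
    assert User(id=1, name="ada") != User(id=1, name="bob")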
|
Plasma norepinephrine in sleep apnea syndrome. In 8 male patients suffering from sleep apnea syndrome, plasma norepinephrine (NE) levels were examined. At 22.00 and at 6.30 blood samples were obtained. In 3 cases, plasma NE levels displayed little changes between 22.00 and 6.30. In 2 cases, plasma NE levels at 6.30 increased 20% compared with those at 22.00. In 3 cases, plasma NE levels at 6.30 increased more than 40%. No significant correlation between apnea index and plasma NE levels was observed. Total time under 90% arterial oxygen saturation (SaO2) significantly correlated with the ratio of plasma NE levels at 6.30 to those at 22.00. In 2 cases of the 8 patients blood samples were drawn hourly (22.00-6.00) and at 6.30. In these 2 cases, when SaO2 decreased, plasma NE levels tended to increase. It is concluded that in sleep apnea syndrome plasma NE levels increased during sleep and did not correlate with apnea index but with oxygen desaturation. |
// SPDX-License-Identifier: Apache-2.0
// Copyright (C) 2018 IBM Corp.
#include "config.h"
#include "hiomap.hpp"
#include <endian.h>
#include <host-ipmid/ipmid-api.h>
#include <cstring>
#include <fstream>
#include <functional>
#include <host-ipmid/ipmid-host-cmd-utils.hpp>
#include <host-ipmid/ipmid-host-cmd.hpp>
#include <iostream>
#include <map>
#include <phosphor-logging/log.hpp>
#include <sdbusplus/bus.hpp>
#include <sdbusplus/bus/match.hpp>
#include <sdbusplus/exception.hpp>
#include <string>
#include <tuple>
#include <utility>
using namespace sdbusplus;
using namespace phosphor::host::command;
static void register_openpower_hiomap_commands() __attribute__((constructor));
namespace openpower
{
namespace flash
{
constexpr auto BMC_EVENT_DAEMON_READY = 1 << 7;
constexpr auto BMC_EVENT_FLASH_CTRL_LOST = 1 << 6;
constexpr auto BMC_EVENT_WINDOW_RESET = 1 << 1;
constexpr auto BMC_EVENT_PROTOCOL_RESET = 1 << 0;
constexpr auto IPMI_CMD_HIOMAP_EVENT = 0x0f;
constexpr auto HIOMAPD_SERVICE = "xyz.openbmc_project.Hiomapd";
constexpr auto HIOMAPD_OBJECT = "/xyz/openbmc_project/Hiomapd";
constexpr auto HIOMAPD_IFACE = "xyz.openbmc_project.Hiomapd.Protocol";
constexpr auto HIOMAPD_IFACE_V2 = "xyz.openbmc_project.Hiomapd.Protocol.V2";
constexpr auto DBUS_IFACE_PROPERTIES = "org.freedesktop.DBus.Properties";
struct hiomap
{
bus::bus* bus;
/* Signals */
bus::match::match* properties;
bus::match::match* window_reset;
bus::match::match* bmc_reboot;
/* Protocol state */
std::map<std::string, int> event_lookup;
uint8_t bmc_events;
uint8_t seq;
};
/* TODO: Replace get/put with packed structs and direct assignment */
template <typename T>
static inline T get(void* buf)
{
T t;
std::memcpy(&t, buf, sizeof(t));
return t;
}
template <typename T>
static inline void put(void* buf, T&& t)
{
std::memcpy(buf, &t, sizeof(t));
}
typedef ipmi_ret_t (*hiomap_command)(ipmi_request_t req, ipmi_response_t resp,
ipmi_data_len_t data_len,
ipmi_context_t context);
struct errno_cc_entry
{
int err;
int cc;
};
static const errno_cc_entry errno_cc_map[] = {
{0, IPMI_CC_OK},
{EBUSY, IPMI_CC_BUSY},
{ENOTSUP, IPMI_CC_INVALID},
{ETIMEDOUT, 0xc3}, /* FIXME: Replace when defined in ipmid-api.h */
{ENOSPC, 0xc4}, /* FIXME: Replace when defined in ipmid-api.h */
{EINVAL, IPMI_CC_PARM_OUT_OF_RANGE},
{ENODEV, IPMI_CC_SENSOR_INVALID},
{EPERM, IPMI_CC_INSUFFICIENT_PRIVILEGE},
{EACCES, IPMI_CC_INSUFFICIENT_PRIVILEGE},
{-1, IPMI_CC_UNSPECIFIED_ERROR},
};
static int hiomap_xlate_errno(int err)
{
const errno_cc_entry* entry = &errno_cc_map[0];
while (!(entry->err == err || entry->err == -1))
{
entry++;
}
return entry->cc;
}
static void ipmi_hiomap_event_response(IpmiCmdData cmd, bool status)
{
using namespace phosphor::logging;
if (!status)
{
log<level::ERR>("Failed to deliver host command",
entry("SEL_COMMAND=%x:%x", cmd.first, cmd.second));
}
}
static int hiomap_handle_property_update(struct hiomap* ctx,
sdbusplus::message::message& msg)
{
std::map<std::string, sdbusplus::message::variant<bool>> msgData;
std::string iface;
msg.read(iface, msgData);
for (auto const& x : msgData)
{
if (!ctx->event_lookup.count(x.first))
{
/* Unsupported event? */
continue;
}
uint8_t mask = ctx->event_lookup[x.first];
auto value = sdbusplus::message::variant_ns::get<bool>(x.second);
if (value)
{
ctx->bmc_events |= mask;
}
else
{
ctx->bmc_events &= ~mask;
}
}
auto cmd = std::make_pair(IPMI_CMD_HIOMAP_EVENT, ctx->bmc_events);
ipmid_send_cmd_to_host(std::make_tuple(cmd, ipmi_hiomap_event_response));
return 0;
}
static bus::match::match hiomap_match_properties(struct hiomap* ctx)
{
auto properties =
bus::match::rules::propertiesChanged(HIOMAPD_OBJECT, HIOMAPD_IFACE_V2);
bus::match::match match(
*ctx->bus, properties,
std::bind(hiomap_handle_property_update, ctx, std::placeholders::_1));
return match;
}
static int hiomap_handle_signal_v2(struct hiomap* ctx, const char* name)
{
ctx->bmc_events |= ctx->event_lookup[name];
auto cmd = std::make_pair(IPMI_CMD_HIOMAP_EVENT, ctx->bmc_events);
ipmid_send_cmd_to_host(std::make_tuple(cmd, ipmi_hiomap_event_response));
return 0;
}
static bus::match::match hiomap_match_signal_v2(struct hiomap* ctx,
const char* name)
{
using namespace bus::match;
auto signals = rules::type::signal() + rules::path(HIOMAPD_OBJECT) +
rules::interface(HIOMAPD_IFACE_V2) + rules::member(name);
bus::match::match match(*ctx->bus, signals,
std::bind(hiomap_handle_signal_v2, ctx, name));
return match;
}
static ipmi_ret_t hiomap_reset(ipmi_request_t request, ipmi_response_t response,
ipmi_data_len_t data_len, ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE, "Reset");
try
{
ctx->bus->call(m);
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_get_info(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 1)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE, "GetInfo");
m.append(reqdata[0]);
try
{
auto reply = ctx->bus->call(m);
uint8_t version;
uint8_t blockSizeShift;
uint16_t timeout;
reply.read(version, blockSizeShift, timeout);
uint8_t* respdata = (uint8_t*)response;
/* FIXME: Assumes v2! */
put(&respdata[0], version);
put(&respdata[1], blockSizeShift);
put(&respdata[2], htole16(timeout));
*data_len = 4;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_get_flash_info(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "GetFlashInfo");
try
{
auto reply = ctx->bus->call(m);
uint16_t flashSize, eraseSize;
reply.read(flashSize, eraseSize);
uint8_t* respdata = (uint8_t*)response;
put(&respdata[0], htole16(flashSize));
put(&respdata[2], htole16(eraseSize));
*data_len = 4;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_create_window(struct hiomap* ctx, bool ro,
ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len)
{
if (*data_len < 4)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto windowType = ro ? "CreateReadWindow" : "CreateWriteWindow";
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, windowType);
m.append(le16toh(get<uint16_t>(&reqdata[0])));
m.append(le16toh(get<uint16_t>(&reqdata[2])));
try
{
auto reply = ctx->bus->call(m);
uint16_t lpcAddress, size, offset;
reply.read(lpcAddress, size, offset);
uint8_t* respdata = (uint8_t*)response;
/* FIXME: Assumes v2! */
put(&respdata[0], htole16(lpcAddress));
put(&respdata[2], htole16(size));
put(&respdata[4], htole16(offset));
*data_len = 6;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_create_read_window(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
return hiomap_create_window(ctx, true, request, response, data_len);
}
static ipmi_ret_t hiomap_create_write_window(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
return hiomap_create_window(ctx, false, request, response, data_len);
}
static ipmi_ret_t hiomap_close_window(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 1)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "CloseWindow");
m.append(reqdata[0]);
try
{
auto reply = ctx->bus->call(m);
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_mark_dirty(ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 4)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "MarkDirty");
/* FIXME: Assumes v2 */
m.append(le16toh(get<uint16_t>(&reqdata[0]))); /* offset */
m.append(le16toh(get<uint16_t>(&reqdata[2]))); /* size */
try
{
auto reply = ctx->bus->call(m);
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_flush(ipmi_request_t request, ipmi_response_t response,
ipmi_data_len_t data_len, ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "Flush");
try
{
/* FIXME: No argument call assumes v2 */
auto reply = ctx->bus->call(m);
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_ack(ipmi_request_t request, ipmi_response_t response,
ipmi_data_len_t data_len, ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 1)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "Ack");
auto acked = reqdata[0];
m.append(acked);
try
{
auto reply = ctx->bus->call(m);
/* Update our cache: Necessary because the signals do not carry a value
*/
ctx->bmc_events &= ~acked;
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
static ipmi_ret_t hiomap_erase(ipmi_request_t request, ipmi_response_t response,
ipmi_data_len_t data_len, ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 4)
{
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* reqdata = (uint8_t*)request;
auto m = ctx->bus->new_method_call(HIOMAPD_SERVICE, HIOMAPD_OBJECT,
HIOMAPD_IFACE_V2, "Erase");
/* FIXME: Assumes v2 */
m.append(le16toh(get<uint16_t>(&reqdata[0]))); /* offset */
m.append(le16toh(get<uint16_t>(&reqdata[2]))); /* size */
try
{
auto reply = ctx->bus->call(m);
*data_len = 0;
}
catch (const exception::SdBusError& e)
{
return hiomap_xlate_errno(e.get_errno());
}
return IPMI_CC_OK;
}
#define HIOMAP_C_RESET 1
#define HIOMAP_C_GET_INFO 2
#define HIOMAP_C_GET_FLASH_INFO 3
#define HIOMAP_C_CREATE_READ_WINDOW 4
#define HIOMAP_C_CLOSE_WINDOW 5
#define HIOMAP_C_CREATE_WRITE_WINDOW 6
#define HIOMAP_C_MARK_DIRTY 7
#define HIOMAP_C_FLUSH 8
#define HIOMAP_C_ACK 9
#define HIOMAP_C_ERASE 10
static const hiomap_command hiomap_commands[] = {
[0] = NULL, /* Invalid command ID */
[HIOMAP_C_RESET] = hiomap_reset,
[HIOMAP_C_GET_INFO] = hiomap_get_info,
[HIOMAP_C_GET_FLASH_INFO] = hiomap_get_flash_info,
[HIOMAP_C_CREATE_READ_WINDOW] = hiomap_create_read_window,
[HIOMAP_C_CLOSE_WINDOW] = hiomap_close_window,
[HIOMAP_C_CREATE_WRITE_WINDOW] = hiomap_create_write_window,
[HIOMAP_C_MARK_DIRTY] = hiomap_mark_dirty,
[HIOMAP_C_FLUSH] = hiomap_flush,
[HIOMAP_C_ACK] = hiomap_ack,
[HIOMAP_C_ERASE] = hiomap_erase,
};
/* FIXME: Define this in the "right" place, wherever that is */
/* FIXME: Double evaluation */
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
static ipmi_ret_t hiomap_dispatch(ipmi_netfn_t netfn, ipmi_cmd_t cmd,
ipmi_request_t request,
ipmi_response_t response,
ipmi_data_len_t data_len,
ipmi_context_t context)
{
struct hiomap* ctx = static_cast<struct hiomap*>(context);
if (*data_len < 2)
{
*data_len = 0;
return IPMI_CC_REQ_DATA_LEN_INVALID;
}
uint8_t* ipmi_req = (uint8_t*)request;
uint8_t* ipmi_resp = (uint8_t*)response;
uint8_t hiomap_cmd = ipmi_req[0];
if (hiomap_cmd == 0 || hiomap_cmd > ARRAY_SIZE(hiomap_commands) - 1)
{
*data_len = 0;
return IPMI_CC_PARM_OUT_OF_RANGE;
}
bool is_unversioned =
(hiomap_cmd == HIOMAP_C_RESET || hiomap_cmd == HIOMAP_C_GET_INFO ||
hiomap_cmd == HIOMAP_C_ACK);
if (!is_unversioned && ctx->seq == ipmi_req[1])
{
*data_len = 0;
return IPMI_CC_INVALID_FIELD_REQUEST;
}
ctx->seq = ipmi_req[1];
uint8_t* flash_req = ipmi_req + 2;
size_t flash_len = *data_len - 2;
uint8_t* flash_resp = ipmi_resp + 2;
ipmi_ret_t cc =
hiomap_commands[hiomap_cmd](flash_req, flash_resp, &flash_len, context);
if (cc != IPMI_CC_OK)
{
*data_len = 0;
return cc;
}
/* Populate the response command and sequence */
ipmi_resp[0] = hiomap_cmd;
ipmi_resp[1] = ctx->seq;
*data_len = flash_len + 2;
return cc;
}
} // namespace flash
} // namespace openpower
static void register_openpower_hiomap_commands()
{
using namespace openpower::flash;
/* FIXME: Clean this up? Can we unregister? */
struct hiomap* ctx = new hiomap();
/* Initialise mapping from signal and property names to status bit */
ctx->event_lookup["DaemonReady"] = BMC_EVENT_DAEMON_READY;
ctx->event_lookup["FlashControlLost"] = BMC_EVENT_FLASH_CTRL_LOST;
ctx->event_lookup["WindowReset"] = BMC_EVENT_WINDOW_RESET;
ctx->event_lookup["ProtocolReset"] = BMC_EVENT_PROTOCOL_RESET;
ctx->bus = new bus::bus(ipmid_get_sd_bus_connection());
/* Initialise signal handling */
/*
* Can't use temporaries here because that causes SEGFAULTs due to slot
* destruction (!?), so enjoy the weird wrapping.
*/
ctx->properties =
new bus::match::match(std::move(hiomap_match_properties(ctx)));
ctx->bmc_reboot = new bus::match::match(
std::move(hiomap_match_signal_v2(ctx, "ProtocolReset")));
ctx->window_reset = new bus::match::match(
std::move(hiomap_match_signal_v2(ctx, "WindowReset")));
ipmi_register_callback(NETFUN_IBM_OEM, IPMI_CMD_HIOMAP, ctx,
openpower::flash::hiomap_dispatch, SYSTEM_INTERFACE);
}
|
/* ch02/calcs.c */
#include <stdio.h>
int main() {
int num1, num2;
printf("Please enter two numbers, separated by a space: ");
scanf("%d %d", &num1, &num2);
printf("%d + %d is %d\n", num1, num2, num1 + num2);
printf("%d - %d is %d\n", num1, num2, num1 - num2);
printf("%d * %d is %d\n", num1, num2, num1 * num2);
printf("%d / %d is %d\n", num1, num2, num1 / num2);
printf("%d %% %d is %d\n", num1, num2, num1 % num2);
} |
Paramagnetic 1H-NMR relaxation probes of stereoselectivity in metalloporphyrin catalyzed olefin epoxidation. Enantioselective catalytic epoxidation of olefins is an important problem from both practical and mechanistic points of view. The origins of chiral induction by asymmetric porphyrin and salen complexes were investigated by FT-NMR T1 relaxation techniques. A new chiral vaulted porphyrin that carries (S)-binaphthyl-L-alanine straps across both faces of the porphyrin macrocycle was synthesized and characterized. (R)-styrene oxide was obtained in > 90% ee in the initial stages of styrene epoxidation with F5PhIO catalyzed by 1-Fe(III)Cl. The transition state for olefin epoxidation with high-valent metal-oxo species was modeled by coordinating epoxides to paramagnetic copper complexes of the corresponding ligands. The epoxide enantiomer that better fit the chiral cavity of the catalyst, as revealed by T1 relaxation measurements, was also the major product of catalytic olefin epoxidation. These results are consistent with the "lock-and-key" mechanism of asymmetric catalysis by metalloporphyrins. The copper complex of a chiral salen ligand showed no differentiation in terms of T1 relaxation rates between the enantiomers of cis-beta-methylstyrene oxide in contrast to the high enantioselectivity observed for catalytic epoxidation. |
What Will The Warriors Look Like This Time Next Year?
The Golden State Warriors’ offseason couldn’t have gone better. First LeBron James signed for the Los Angeles Lakers without assembling a deadly superteam (yet). Then the Warriors broke the internet by using the taxpayer Mid-Level Exception to sign DeMarcus Cousins. Now their biggest rival, the Houston Rockets, appears hellbent on signing Carmelo Anthony, which, coupled with the losses of versatile, two-way wings Trevor Ariza and Luc Richard Mbah a Moute, looks like a big step backward.
Meanwhile they brought Kevin Durant and Kevon Looney back on good value contracts, signed head coach Steve Kerr to a long-term extension, bolstered their wing depth by drafting Jacob Evans, and added another piece that should exceed the value they are paying for him in Jonas Jerebko, who they picked up at the minimum after his $4M contract was waived by the Utah Jazz.
They will enter the 2018-19 season as prohibitive title favorites. That is quite something when you consider that no team has been to five straight NBA Finals, and won four out of five titles, since Bill Russell’s Boston Celtics in the very infancy of the league.
But fear not NBA fans. Almost more than whatever happens in June 2019, next July looks to be a legacy-defining summer for Golden State. As the Warriors prepare to enter their new arena complex, the Chase Center, almost their entire roster can become free agents. This all happens just as the Warriors hit the dreaded repeater tax, which increases the luxury tax payments teams have to pay for exceeding the cap. This year could be peak Warriors.
Here’s an early look at which of their core All-Stars may be on their roster next year.
Never say never. The NBA is a crazy business. But Steph Curry being on the Warriors in a year’s time is about as certain as the NBA can be. He’s locked down for another four years on his $200m contract, and he looks very much like he will emulate Tim Duncan and Kobe Bryant as the face of a single team dynasty for his whole career. It would take something truly mind-blowing for Curry to leave the Warriors now.
This is where things start getting interesting. Kevin Durant can opt out to become a free agent again next year after signing a one-plus-one deal based on a 20% raise from his discounted salary last year. It saved the Warriors over $20m in salary and taxes combined, but they’d likely much rather have taken the hit from the higher salary he could have taken if he signed a multi-year contract this summer.
Does that mean he’s out the door next year? The chatter has already started, with the New York Knicks being pushed by media voices as respected as Rachel Nichols. If you can ignore the idea that anyone can actually save the Knicks, then it kind of makes sense. He gets to have his own team, be the savior of a moribund franchise, and rehabilitate his image in the eyes of those who still can't get over his move to the Warriors (if that even matters at this point). If he wins the title with Golden State this year, he’ll have been a key part of the first three-peat since the 2000-2002 Los Angeles Lakers, cementing his legacy.
But there is another reason he may have taken the shorter deal. This offseason the Warriors only possessed his ‘Early Bird rights’, which meant the most they could pay him was a four-year deal with 5% raises, at 35% of the $101.9m salary cap. Next summer they will possess his ‘Full Bird rights’, which means they can pay him a five-year deal with 8% annual raises. The cap is expected to skyrocket up to $109m, meaning that the full contract he could sign could be as much as $221m for five years. Not bad money for a player who will be turning 31.
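For the back-of-the-envelope version of that maths, here is a short sketch assuming a $109m cap, a 35% starting salary, and raises equal to 8% of the first-year salary (the article's projections, not official league figures):

cap = 109_000_000                # projected 2019-20 salary cap (article's estimate)
start = 0.35 * cap               # a full Bird-rights max starts at 35% of the cap
annual_raise = 0.08 * start      # raises taken as 8% of the first-year salary
salaries = [start + year * annual_raise for year in range(5)]
print([round(s / 1e6, 1) for s in salaries])  # roughly [38.2, 41.2, 44.3, 47.3, 50.4] ($m)
print(round(sum(salaries) / 1e6, 1))          # roughly 221.3, i.e. the ~$221m figure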
Those ‘Early Bird’ contracts have to be a minimum of two years in length, meaning that if Durant had taken the higher amount that deal would have offered this summer, he would have postponed getting the mega-contract he’s in line for. So either he’s taken less now so he can leave next year, or he’s taken less now so he can sign a mega-deal next year. Or perhaps he wants to do something in between, say a three-year deal at that mega number, leaving him open to departing for another team in his mid-30s when he should still be capable of carrying a franchise himself.
The truth is nobody knows what Kevin Durant will want in a year’s time and it’s pointless to speculate. There will be lucrative offers for him in exciting new scenarios. But no-one else will be able to offer him more money, or likely a better chance of winning, than the Warriors. Plus, you’d think he’d want to play at least a couple of years in the new Chase Center Arena and really cement this dynasty as one of the all-time greats.
If Durant is the first domino to fall, then Klay Thompson is the next big piece. Unless he signs an extension this summer, Thompson can become a free agent next year. He’s rumored to be high on the list of the Lakers, who hope to tempt him away to follow in the footsteps of his father, a former Laker himself. The truth is that the Warriors likely will not pay Thompson the maximum amount he can receive, so Los Angeles could offer a bigger paycheck.
So why is his probability of being a Warrior so high? The thing about Klay Thompson is that he loves being a Warrior. He’s repeatedly said he doesn’t want to go anywhere, will consider taking a discount, and his former Laker father has repeatedly said his son will re-sign next summer.
The big question for Golden State is how much of a discount Thompson is prepared to take. As a free agent with eight years of experience going into the 2019 offseason, he’s eligible for a contract starting at 30% of the cap. That means a starting salary of around $32.7m. Given the astronomical tax bills the Warriors are facing, that’s likely too high a price to pay. Marcus Thompson of The Athletic reported a couple of months ago that discussions on a contract extension had taken place, but there’s been no update since. One major factor is that an extension signed this summer would start at 120% of Klay Thompson’s current salary, so the most he could make in the 2019-20 season would be around $23m. There is still a chance that an extension does get signed this offseason, but it’s a massive discount from what he could get.
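The same kind of sketch covers Thompson's two routes; the current-salary figure below is an assumption (roughly $19m for 2018-19, backed out of the article's ~$23m extension number) rather than a quoted amount:

cap = 109_000_000                        # projected 2019-20 cap (article's estimate)
free_agent_start = 0.30 * cap            # an eight-year veteran's max starts at 30% of the cap
current_salary = 19_000_000              # assumed 2018-19 salary, approximate
extension_start = 1.20 * current_salary  # an extension can start at no more than 120% of current salary
print(round(free_agent_start / 1e6, 1))  # ~32.7 ($m), the free-agency starting figure
print(round(extension_start / 1e6, 1))   # ~22.8 ($m), i.e. the 'around $23m' extension figure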
Perhaps the most reasonable scenario is both sides agree to wait until next summer and split the difference, signing Thompson to a deal starting at around $28m, and the Splash Brothers remain in full effect for another half-decade.
If the question were Draymond Green’s probability of being a Warrior in August 2020, then the probability would likely be lower. Green’s contract runs through July 2020, so along with Steph Curry and rookie Jacob Evans, he’s one of the few Warriors actually under contract next summer.
Earlier in the summer, the Warriors said they are trying to sign him to an extension this year, but that appears much more unlikely than Thompson signing one. Green could well be eligible for a designated veteran extension next year if he can win the Defensive Player of the Year trophy, or make an All-NBA team, as he achieved both those marks in 2016-17. This would allow the Warriors to offer him a five-year deal starting at 35% of the cap. So it’s no surprise that ESPN reported earlier this summer that sources said Green wouldn’t take a discount. It makes no sense for him to sign an extension now when a much more lucrative payday could be in store.
However, the thing about these designated veteran max contracts is that they are humungous. In Green’s case, it would mean paying him around $50m when he’s 35 years old. As crucial as Green is to this dynasty, and he is absolutely essential, that is a very steep price to pay for an undersized big man. As a point of reference, Ben Wallace, another undersized defensive beast, started declining around his age-33 season.
The Warriors will absolutely want to keep Green around for another few seasons if they can. He is their emotional leader, their fire and their fury, and the very heartbeat of the team. He has revolutionized NBA defenses in the way Curry has revolutionized NBA offenses.
The choices they will face next summer, or the year after, with Green will likely come down to a "years vs dollars" question. It’s important to remember that just because they may be able to offer him 35% of the cap, it doesn’t mean they have to. No-one else can offer Green that amount as a free agent. So something closer to 30% of the cap, for say three years, would still be a massive offer of over $100m in total but protect their exposure on the back end of the deal, and save them some of the astronomical tax payments they are facing. Alternatively, they could give him more years than anyone else, but at a lower amount, which would help manage the annual tax bills but mean paying Green for his later years when he could be in decline.
The wildcard in all this is what happens if a player such as Anthony Davis becomes available via trade at some point in the next couple of years. The Warriors are widely rumored to be following his situation closely, but to acquire him it would take a big offer including one of Thompson or Green.
Still, as things stand right now, it looks likely that Green will still be a Warrior for one more year at least this time next summer.
In the next few days, I’ll take a look at the Warriors supporting cast and who might survive the coming luxury tax crunch. |
import { AppError } from '../models'
export const appErrorTree = {
ID_HAS_ALREADY_BEEN_USED: {
name: 'Transaction Error',
message: 'This ID has already been used.',
} as AppError,
DATA_EXISTED: {
name: 'Transaction Error',
message: 'Data already exists.',
} as AppError,
DATA_NOT_EXISTED: {
name: 'Transaction Error',
message: 'Data does not exist.',
} as AppError,
NOT_IMPLEMENTED: {
name: 'Not Implemented Error',
message: 'To be implemented.',
} as AppError,
}
|
//! Returns true if the mouse button is Hit or Pressed
bool Input::getMouseButtonSimple( int button ) {
auto iter = m_mouse.find( button );
if ( iter == m_mouse.end() ) {
return false;
}
return ( iter->second == Input_ButtonState::Hit )
|| ( iter->second == Input_ButtonState::Pressed );
} |
There will be a few familiar faces when Cambridge United face St Neots Town this weekend.
As well as Dylan Williams playing against the U’s – for whom he made his only Football League appearance during their 7-0 win over Morecambe – the likes of goalkeeper Finlay Iron, Jordan Norville-Williams and Tom Knowles have all been on loan at the Premier Plus Stadium in recent seasons.
But the one that stands out is, of course, Jevani Brown. The attacking midfielder found the net 19 times in 20 games for the Saints and earned himself a trial at Birmingham City, which was unsuccessful.
Brown started for the Saints as they faced United in pre-season last year, with Clements saying he was the best player on the pitch on the day.
The U’s coaching staff must have thought the same, taking him on trial before offering him a two-year deal.
Clements was always outspoken in his support for the midfielder and said even now he is delighted to see how he’s progressed into becoming a firm fan favourite at the Abbey.
“I’m not being disrespectful saying this, but he was the best player on the pitch that day, the most exciting,” he said.
“On the back of that he went away for a couple of weeks and then the rest is history.
“It was so positive the impact he made last season – the boy did fantastic.
The U’s are one of four Football League teams visiting the Premier Plus Stadium – with matches against Peterborough, MK Dons and Stevenage also scheduled - something which Clements believes shows how the club is viewed by their peers.
And Clements also added that he hopes that a few of the United youngsters who could be on show this weekend could make an appearance for the Southern League side this season.
“Of course we would always look to bring in loans,” he said.
“We’ve got a really good relationship with Ben [Strang], Mark [Bonner] and Joe [Dunne].
“Joe will do a pre-season with them and decide what players he wants to send out. We would love someone like Jordan here again next year, or young, attacking players who fit our style.” |